Observation of Information Manipulations on EU Parliament Elections

Executive Summary

From the start of 2024 until early April, detailed analysis revealed the engagement of 16,902 troll accounts in related discussions, making up 12.10% of the total conversation. Media analysis during this period also detected 1,365 instances of media engagement, with 4.84% of these instances (66 mentions) linked to state-affiliated media from China and Russia.

In the lead-up to the European Parliament elections, the digital discourse is heavily influenced by orchestrated campaigns from troll groups, highlighting two primary developments: the purported rise of far-right movements within Europe and initiatives to counteract interference on social media by Russia, large tech firms, and other external actors. These troll-driven narratives foster a deliberate skepticism towards the threat of extremism and promote a robust endorsement of conservative ideologies, skewing public perception. Moreover, they amplify concerns over government censorship, question the integrity of mainstream media, and feed disillusionment with traditional outlets’ handling of topics like Russian propaganda, thereby sowing a deeper mistrust in the media as fair conveyors of information. The impending rollout of EU regulations aimed at curbing disinformation on platforms such as X, TikTok, and Facebook is presented by these trolls as a contentious issue, complicating efforts to balance election security with the maintenance of free speech rights.

Troll groups’ narratives, reflecting those in Russian and Chinese state-affiliated media, span a wide array of concerns impacting Europe, with a special focus on the continent’s declining living standards and extending to topics beyond the immediate scope of the European Parliament elections. These narratives delve into issues like Russia’s War in Ukraine, energy security, digital regulation, migration, and climate change, showcasing a thematic coherence that transcends borders and highlights key geopolitical and socio-economic challenges. This alignment underscores a concerted effort to shape public discourse around critical issues affecting Europe, revealing a strategic intersection of interests among troll groups and state media from Russia and China, particularly on matters such as energy security and geopolitical conflicts.

The narrative landscape of online trolling has been significantly influenced by two predominant troll groups, each advocating distinct agendas on two platforms: Twitter and YouTube. On Twitter, one group has launched intense criticism against Joe Biden, scrutinizing his mental health and policy decisions, and has targeted Canadian Prime Minister Trudeau with accusations of economic mismanagement and media bias, casting Canada unfavorably. Conversely, on YouTube, the narrative pushed by another troll group expresses dissatisfaction with financial assistance to Ukraine, levels criticism at NATO for intensifying conflicts, and challenges U.S. foreign policy, particularly its support for Israel, advocating for President Biden to adopt a stance more sympathetic to Palestine.

As this intricate web of disinformation unfolds, similar strategies observed in the recent Taiwan presidential election and the TikTok banning event in the U.S. shed further light on the pervasive tactics employed. In both instances, narratives around the regulation of social media platforms echo those seen in the EU, with troll groups vocally championing ‘freedom of speech’ as a principal argument against regulation. Furthermore, when the discourse veers towards the rise of far-right ideologies, these groups skillfully divert the conversation by asserting that the EU should prioritize resolving other more critical issues. This tactic of distraction aligns with their broader strategy of undermining focused discussions on pressing matters. Additionally, a recurring pattern emerges where media outlets that highlight misinformation become targets themselves, accused of bias or incompetence. This tactic not only challenges the credibility of the media but also attempts to dilute the severity of misinformation issues, reflecting a sophisticated approach to disrupting coherent public discourse on a global scale.

As the European Union navigates this critical period marked by geopolitical shifts, internal challenges, and external manipulations, the role of digital platforms in shaping political narratives becomes increasingly evident. The orchestrated activities of troll groups, coupled with the strategic dissemination of narratives that echo state-affiliated media from adversarial nations, highlight a complex web of influence aimed at destabilizing public discourse and swaying electoral outcomes. This environment, ripe with disinformation and polarized ideologies, underscores the urgent need for robust mechanisms to safeguard the integrity of the democratic process. As EU citizens approach a pivotal election, establishing AI-powered monitoring mechanisms with trustworthy partners, and thereby building the collective resilience of the European community against such disruptions, will be crucial in steering the continent towards a future that reflects its democratic values and principles, ensuring that the voice of the electorate prevails amidst the cacophony of digital warfare.


As the European Union approaches the June 2024 European Parliament elections, it faces a complex and rapidly changing global environment. These elections come in the wake of significant global events, including the ongoing conflict in Ukraine, the aftermath of the COVID-19 pandemic, the completion of Brexit, and the anticipation of the upcoming U.S. presidential election, potentially featuring Donald Trump as a candidate. The political landscape within the EU is experiencing a noticeable shift toward right-wing ideologies, challenging traditional political factions and signaling a possible reconfiguration of power in Brussels. The upcoming elections are poised to be a pivotal moment: while the existing European Commission has introduced significant legislative measures, the forthcoming Commission may chart a new course, potentially altering the EU’s strategic direction.

Download full report: Observation of Information Manipulations on EU Parliament Elections

Observation of Information Manipulations against TikTok Ban

Executive Summary

The digital realm has witnessed TikTok’s rapid ascension, captivating a global audience with its vibrant content and advanced algorithms. Yet, this rise has been shadowed by significant controversies, especially regarding debates in the United States over a potential TikTok ban, fueled by concerns over national security, data privacy, and the spread of misinformation. This contentious issue has sparked extensive discussions among lawmakers, technology experts, and digital communities, uncovering a complex web of digital manipulation and misinformation. From January 1 to March 25, 2024, an in-depth analysis recorded the involvement of 9,080 troll accounts in these debates, accounting for 11.73% of the total dialogue, thus underscoring the significant impact of troll-driven narratives on shaping public discourse.

This conversation spans three critical incidents: legislative efforts to restrict minors’ access to social media, the Biden campaign’s strategic engagement with TikTok, and debates surrounding the enactment of H.R. 7521. Each of these scenarios has ignited varied online reactions, with a notable share of the conversation being influenced by troll accounts. For instance, the initiative in Florida to limit social media access for minors saw 12.25% of its discussion driven by troll accounts, highlighting debates on the balance between individual rights and governmental authority. The discourse concerning the Biden campaign’s use of TikTok and the legislative debates on H.R. 7521 further delve into issues of free speech, privacy, and governmental oversight, along with critiques of political leadership and representation. Trolls have extended their reach to manipulate discussions on a broad array of events, from global conflicts to international diplomacy. These include: international conflicts within European Union countries, notably Germany, Lithuania, and Sweden; the arrest of a Japanese crime boss involved in an attempt to smuggle nuclear materials to Iran; the Israel-Hamas conflict; the Ukrainian-Russian War; and issues pertaining to China’s international diplomacy and geopolitical strategies, each aiming to influence public opinion and policy.

The narrative varies across different platforms—Twitter, YouTube, Weibo, and TikTok—with observed lower manipulation activity on Facebook. Discussions on Twitter often revolve around political dissatisfaction and concerns over privacy and national security, while YouTube critiques focus on U.S. leadership and TikTok’s content moderation practices. Weibo users tend to criticize U.S. policies, portraying them as bullying, whereas TikTok discussions emphasize speech restrictions and systemic critiques. These discussions often serve to challenge authority, question leadership, mobilize youth opposition, and notably, accuse the U.S. of violating First Amendment rights.

A comparison of narratives on American versus Chinese-owned social platforms reveals distinct focuses. Chinese platforms tend to argue that the U.S. approach to banning TikTok differs from that of other Western countries, suggesting that such a ban does not reflect the will of the American people and often pointing out that U.S. companies engage in more surveillance of their citizens than TikTok.

Furthermore, the analysis highlights a deliberate effort by troll accounts to echo narratives promoted by Chinese state-affiliated media, aiming to critique U.S. policies on free speech through the lens of the TikTok ban debate. By aligning with the viewpoints of outlets like Guangming Daily and Takungpao, these accounts play a pivotal role in spreading narratives that accuse the U.S. of hypocrisy regarding free speech and censorship, attempting to sway public opinion in favor of allowing TikTok to operate freely in the U.S. This concerted action underlines the strategic use of digital platforms in the broader geopolitical struggle, emphasizing the power of narrative in shaping the discourse on digital governance and international relations.


In the past few years, the digital arena has seen the explosive growth of TikTok, a platform that has enchanted millions with its compelling content and cutting-edge algorithms. Yet, this surge in popularity is intertwined with significant controversy. Central to the discord is the ongoing debate over the potential prohibition of TikTok in the United States, driven by apprehensions regarding national security, data privacy, and the proliferation of false information. This matter has ignited intense discussions among both policymakers and tech experts, drawing widespread attention across online communities. Amidst this chaos, thorough research has uncovered a complex environment where digital manipulation and misinformation thrive, revealing the intricate challenges at the heart of modern digital discourse.

2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Executive Summary

During the tumultuous period surrounding Taiwan’s presidential election, an extensive research endeavor unveiled a nuanced landscape of digital manipulation and misinformation. At the heart of this investigation lay the pervasive influence of generative technology, which emerged as a potent tool for shaping the informational battleground. Textual misinformation, propelled by the capabilities of generative algorithms, assumed a central role, challenging traditional debunking methodologies and rendering them less effective in the face of evolving manipulation tactics. This shift underscored the pressing need to adapt information verification strategies to contend with the sophisticated mechanisms employed in modern disinformation campaigns.

Amidst the cacophony of digital discourse, a select cadre of troll groups emerged as influential arbiters of media framing, transcending mere mischief to exert substantial sway over public opinion. Contrary to initial assumptions, these groups were revealed to be non-native to Taiwan, operating with remarkable agility across linguistic and cultural boundaries. Their strategic maneuvers, characterized by narrative manipulation and geopolitical intrigue, underscored the transnational nature of contemporary information warfare and its implications for democratic processes. As the election approached, their influence surged, casting a pervasive shadow over the democratic landscape and underscoring the need for heightened vigilance against external interference.

Furthermore, collaborative efforts between mainstream entities and official media channels served to amplify the reach and impact of manipulated narratives. The alignment of these groups with state-supported agendas, particularly evident on Chinese-owned social media platforms, underscored the symbiotic relationship between information manipulation and political influence. Within this complex ecosystem, the emergence of short videos as a prevalent medium for cognitive manipulation added another layer of complexity, blurring the lines between fact and fiction in the digital realm. Thus, within the context of Taiwan’s presidential election, the convergence of technological innovation, geopolitical maneuvering, and media manipulation underscored the multifaceted challenges confronting modern democracies in safeguarding the integrity of public discourse.


In 2024, many countries held elections, and Taiwan emerged as a benchmark for the impact of foreign information operations worldwide. Taiwan was the first democratic country to hold elections in 2024 and a pioneer in using AI technology to observe information manipulation during an election. Taiwan AI Labs hopes to share Taiwan’s experiences and lessons in dealing with information manipulation, along with the threats of generative AI, with other democratic nations.

Taiwan AI Labs, by observing and analyzing the coordinated behavior of troll accounts on social media platforms, identifies these accounts and groups them into troll groups. Utilizing large language models and AI, we developed the Infodemic platform to not only reveal the activities of these troll groups but also to delve deeper into the abnormal behaviors behind these accounts.

During this presidential election period, Taiwan AI Labs conducts weekly analyses of online anomalies using the Infodemic platform, inviting scholars and experts to discuss and share insights derived from our analyses. This report compiles the key findings from our organized data.

Download full report: 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

2024 Jan W1 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Insights on Manipulation Strategies

This week, we detected a significant number of potentially AI-generated scandalous videos, disseminated with the assistance of troll groups, with the aim of tarnishing the image of specific candidates.

Chinese state-affiliated media outlets this week heavily focused on topics such as the “Taiwan Strait crisis” and the “high-end vaccine scandal.” Troll groups followed suit, amplifying discussions related to these issues in public opinion.

In this analysis, we observed the creation of likely fake Facebook pages that initially attract the general public with video content before shifting to sharing political topics to influence readers.

Some accounts simultaneously operated on domestic and international events, with discourse highly resembling Chinese state-affiliated media (similarity scores of 42.6% and 37.2%). They actively participated in the current presidential election through Facebook groups #61009 and #61019. This week, their activities were less focused on the Ko Wen-je fan page and were primarily centered around specific candidates and the incumbent president’s fan pages, with a discourse primarily aimed at attacking a particular political party.

During the Ministry of National Defense’s national-level alert event, fake accounts systematically shared information within social groups to promote the government and the ruling party ahead of the upcoming elections.

Download full report: 2024 Jan W1 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

2023 Dec W4 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Insights on Manipulation Strategies

This week, AI Labs focused on troll operations and their historical behavior on Facebook, organizing related information about the two groups discussed below (Facebook #61009 and #61019) on the Infodemic website as supplementary material.

From September to November 2023, China prominently used war threats against Taiwan, accusing the Taiwanese government of pushing the island toward the brink of war. In December, with the election approaching, the tone of war threats decreased, and China shifted to emphasize educational and economic issues, focusing on “the impact of ECFA’s termination on Taiwan’s economy” and shifting from “the wave of university closures in Taiwan” to “De-Sinicization of Taiwan’s curriculum.”

AI Labs analyzed which troll groups most closely echoed PRC state-affiliated media from September to December, and found that Facebook #61009 (42.6%) and Facebook #61019 (37.2%) showed the highest resonance.
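As a purely hypothetical illustration of how such a resonance score could be derived, the sketch below computes cosine similarity over term counts between a troll group’s comments and state-media headlines. The sample texts, the toy whitespace tokenizer, and the scoring method are all illustrative assumptions; the report does not disclose its exact algorithm.

```python
import math
from collections import Counter

def term_vector(texts):
    # Bag-of-words counts over whitespace tokens (toy tokenizer).
    return Counter(word for text in texts for word in text.lower().split())

def cosine_similarity(a, b):
    # Cosine similarity between two term-count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented sample texts, loosely themed on the narratives above.
state_media = ["termination of ecfa will sacrifice taiwan economy",
               "war threat pushes taiwan to brink of conflict"]
troll_group = ["termination of ecfa will destroy taiwan economy",
               "vote for peace not war"]

score = cosine_similarity(term_vector(state_media), term_vector(troll_group))
# score falls between 0 (no shared vocabulary) and 1 (identical term profile)
```

In practice a production system would use richer representations (e.g., TF-IDF weighting or embeddings over full articles), but the resulting percentage can be read the same way: higher values mean a group’s discourse more closely tracks the media corpus.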

As the election approached, Facebook #61009’s narratives closely aligned with official media, focusing on war threats against Taiwan and attacking education and economic issues. Domestically, they mainly targeted Tsai Ing-wen as a ‘fake Ph.D.’; internationally, they criticized U.S. domestic issues in English, branding Biden as a dictator. They predominantly used livelihood issues as their attack strategy in both domestic and international operations.

Compared to Facebook #61009, Facebook #61019 more frequently used abusive language and continuously flooded specific content under candidates’ posts, influencing discussion content.

This week, these two groups ranked as the top two in operational volume on the New Taipei middle school student throat-slitting case, linking the incident to Tsai Ing-wen’s government and to support for abolishing the death penalty in Taiwan, thereby undermining the image of Tsai’s government.

This week, the highest-volume troll group on PTT, PTT #60001 (8.9%), used the Distort tactic to spread narratives linking the DPP with the Chinese Communist Party, alleging lies by Lai Ching-te, and spreading misinformation, influencing people’s perception of the facts.

This week on the TikTok platform, troll groups supporting the KMT, TikTok #74075 (75%) and TikTok #74023 (25%), intensified their efforts, participating in various issues related to the KMT, including Hou Yu-ih’s Kai Xuan Yuan controversy and Jaw Shaw-kong’s slip of the tongue. For the aforementioned issues detrimental to the Blue Camp, they used the Distract tactic, repeatedly commenting “KMT governance brings peace and security to the people, voting for all KMT candidates” to dominate related topic pages and shift the public’s focus. This action aligns with the main theme of this week’s PRC state-affiliated media, “KMT governance will improve the economy,” and echoes the operations observed on TikTok and YouTube.

This week, the controversy over Lai Ching-te’s illegal construction of his family home continued from last week’s operations on various platforms, with PRC state-affiliated media also echoing related narratives. However, after the New Taipei student throat-slitting case on the 25th, PTT and Facebook saw related narratives attacking the ruling party on the 27th, followed by PRC state-affiliated media echoing the operation of this event on the 28th.

This week’s additional main theme in PRC state-affiliated media continued last week’s operations related to ECFA, then focused on the topic of grouper fish imports under the theme “KMT governance will improve the economy,” with echoing operations observed on TikTok and YouTube.

On PTT, the highest operational volume on December 23rd and 24th consisted of attacks on Jaw Shaw-kong’s and Cynthia Wu’s slips of the tongue.

Download full report: 2023 Dec W4 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

2023 Dec W3 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Insights on Manipulation Strategies

From September to November 2023, PRC state-affiliated media’s primary narrative involved threatening Taiwan with war, accusing the Taiwanese government of pushing the island towards the brink of war. In December, as the election approached, the tone of war threats diminished. China shifted its focus to Taiwan’s educational system and economic issues, highlighting the “impact of the termination of ECFA on Taiwan’s economy” and moving from “the wave of university closures in Taiwan” to the “De-Sinicization of Taiwan’s curriculum design” as key education-related issues.

AI Labs’ analysis from September to December identified troll groups most closely echoing PRC state-affiliated media narratives, with Facebook #61009 (42.6%) and Facebook #61019 (37.2%) showing the highest level of resonance.

As the election drew closer, the narrative trends of Facebook #61009 closely aligned with China’s official media, focusing on war threats against Taiwan and primarily attacking educational and economic issues. Domestically, the group mainly targeted Tsai Ing-wen, labeling her as a ‘fake Ph.D.’; internationally, they criticized U.S. domestic issues in English, branding Biden as a dictator. Both domestic and international operations prominently used livelihood issues as their primary attack strategy.

Since September, Facebook #61019 has mirrored official media trends in the narrative of “The U.S. disregards the life and death of Taiwanese people,” with recent narratives also including war threats and economic issues. Domestically, the group focused on Tsai Ing-wen’s thesis controversy, while internationally, they attacked U.S. foreign policy failures in English, claiming a stronger voice in Taiwan favoring unification.

Regarding this week’s coordinated behavior, PRC state-affiliated media continued last week’s narratives and themes, focusing on “DPP’s election will lead to military danger” and “Termination of ECFA impacts Taiwan’s economy.” Following China’s announcement on December 21st terminating 12 ECFA tariff preferences, there was a surge in operations on PTT and YouTube.

Throughout the week, platforms in Taiwan featured narratives attacking the illegal construction at Lai Ching-te’s family home, with official media also discussing this from December 19th to 21st. Analyzing related narratives across platforms, PTT and Facebook had the highest activity levels. The groups most closely echoing official media, Facebook #61019 (33.8%) and Facebook #61009 (26.6%), were also the most active, employing tactics like repetitive comments using phrases such as “Lai Pi Liao (賴皮寮)” and “Refusing to demolish (賴著不拆)” to manipulate the discussion.

Download full report: 2023 Dec W3 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

2023 Dec W2 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Insights on Manipulation Strategies

This week, PRC state-affiliated media had two major stories, “DPP’s election will lead to military danger” and “De-Sinicization of Taiwan’s curriculum,” both echoed by troll groups. The story of “DPP’s election will lead to military danger” was echoed by troll groups on Facebook, while the story of “De-Sinicization of Taiwan’s curriculum” was amplified by PTT and TikTok troll groups.

The aggregated top stories from PRC state-affiliated media from October through this week were “Taiwan pushed to the brink of military conflict” (23%), “The U.S. disregards the death of Taiwanese people” (15.6%), and “Termination of ECFA sacrifices Taiwan’s economy” (13%). The story of “Taiwan pushed to the brink of military conflict” is decreasing, while “Termination of ECFA sacrifices Taiwan’s economy” is growing.

In TikTok’s troll operations, the second-ranked group, TikTok #74046 (11%), underwent one account suspension and two shifts in public opinion strategy within five months. In early July, when the KMT presidential candidate Hou Yu-ih’s support was waning, the group backed the Gou-Han pairing, aiming to influence the KMT’s decision. By the end of November, as the KMT-TPP collaboration was nearing collapse, the group shifted its support to Ko Wen-je.

AI Labs analyzed comments from troll groups on YouTube from October 1st to December 10th and found that YouTube #71012 (11%), YouTube #71319 (7%), and YouTube #71341 (6%) are the most active troll groups on the platform. Among these, YouTube #71012 primarily attacks Ko Wen-je (48.1%) and supports Lai Ching-te (11.5%), YouTube #71319 mainly attacks Ko Wen-je (45.3%) and supports Hou Yu-ih (6.7%), while YouTube #71341 focuses on attacking Tsai Ing-wen (32%) and the DPP (9.9%). These groups primarily use repetitive commenting to steer discussion trends on media channels or to boost interaction on influencer channels, thereby influencing the algorithm.

Download full report: 2023 Dec W2 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

2023 Dec W1 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Insights on Manipulation Strategies

After the failure of the KMT-TPP collaboration, the most active troll account groups (PTT #60011, Facebook #61009, and Facebook #61019) shifted to attacking the Democratic Progressive Party exclusively. Before the collaboration failed, these groups had also attacked the KMT.

During the same period, the top narrative of PRC state-affiliated media, accounting for 45.8% of its news, promoted the “Choice Between Peace and War” concept. Within this, 30% of the themes involved misleading narratives distorted from articles published by American scholar Bonnie Glaser. The related news articles were quickly and effectively distributed by Facebook troll groups.

In an analysis of coordinated behavior from November 1st to December 10th, the highest troll activity targeted Tsai Ing-wen’s Facebook fan page (34.3%), followed by those of Lai Ching-te (8.5%), Ko Wen-je (2.77%), Terry Gou (2.76%), and Hou Yu-ih (2.50%). Furthermore, two Facebook troll account groups, #61009 and #61019, contributed over 50% of total troll activity across all fan pages. These two groups also actively distributed “South China Sea Working Conference” misinformation in July, attacking Taiwan’s military expenditures and discrediting United States support.

Download full report: 2023 Dec W1 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

2023 Nov W4 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Insights on manipulation strategies

This week, following the collapse of the KMT-TPP alliance, there was a noticeable surge in mutual criticisms between the KMT and TPP across various platforms, with the attack intensity recorded as follows: Facebook at 12.9%, YouTube at 23.7%, PTT at 26.7%, and TikTok at 26.7%. Notably, on YouTube, criticisms were more heavily directed at the KMT than the TPP, whereas on other platforms, Ko Wen-je was the primary target.

An analysis of the two most active Facebook troll groups, #61009 and #61019, revealed similar patterns in their active periods and targets of criticism. Both groups suddenly became active on September 6th, the day candidate Terry Gou announced his run for election. They were active on similar stories and predominantly critiqued the KMT, the DPP, and Ko Wen-je, each accounting for an average of 15% of their content, with relatively less focus on Terry Gou.

Throughout this week, Tsai Ing-wen’s Facebook fan page became a hotspot for a large number of coordinated comments. Some of these comments echoed Chinese state-affiliated media’s narrative, suggesting that the DPP’s ascension to power could escalate military tensions and conflict risks. Simultaneously, narratives favoring the KMT appeared on PTT, YouTube, and TikTok, which subsequently found resonance in the narratives pushed by Chinese state-affiliated media.


Download full report: 2023 Nov W4 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Analysis of cognitive warfare and information manipulation in the Israel-Hamas war 2023


Cognitive warfare, information warfare, and disinformation are increasingly being used to undermine democratic societies. These tactics can sow discord, weaken public trust in institutions, and influence public opinion. This article explores cognitive warfare and information manipulation during the 2023 Israel-Hamas war using AI technology; empirical evidence indicates that these approaches were employed even before the conflict.

Taiwan AI Labs is the first open AI research institute in Asia focused on Trustworthy AI computing, responsible AI methodologies, and delivering rights-respecting solutions. As a team with a keen interest in the intersection of social media, artificial intelligence, and cognitive warfare, we are acutely aware of the profound impact of these domains on democratic societies. In this context, our objective is to provide an in-depth analysis of how cognitive warfare, information warfare, and the propagation of misinformation affect the fabric of democratic societies. For example, our team found disinformation about the Israel-Hamas conflict on TikTok in early June.

This article uses the 2023 Israel-Hamas war to shed light on the use of artificial intelligence in unveiling the intricate web of influence campaigns that precede and accompany armed conflicts. This study investigates how various state and non-state actors strategically conduct cognitive warfare and information operations in the digital sphere before and during conflicts. The analysis delves into the manipulation of suspicious accounts across diverse social media platforms and the involvement of foreign entities in shaping the information landscape.

This research’s findings are insightful and carry significant implications for national security, counter-terrorism efforts, and cognitive warfare strategies. Understanding the dynamics of cognitive and information warfare is paramount in countering external influences and safeguarding the integrity of democratic processes. This study serves as a foundation for future endeavors to develop observation indicators for counter-terrorism and cognitive warfare, ultimately contributing to the preservation of democratic values and the well-being of societies.

In a world where information is a potent weapon, this research endeavors to unveil the intricate strategies at play in the digital realm, shedding light on the dynamics that can shape the future of democratic societies. For example, authoritarian regimes such as China are using generative AI to manipulate public affairs globally, especially in democratic societies [1,2]. With this foundation, we embark on a journey to comprehend, analyze, and counter the evolving landscape of cognitive warfare and information operations.


Taiwan AI Labs uses its analytical software “Infodemic” to investigate information operations across multiple social media platforms. The detailed algorithmic information is below.

Analytics data coverage and building similarity nodes between user accounts

1. Analytics data

In the scope of this research, we conducted a comprehensive analysis of 71,774 dubious user accounts sourced from a diverse array of social media platforms, including YouTube, X (Twitter), and PTT, the largest online forum in Taiwan. We organized these suspicious accounts into 9,737 distinct coordinated groups using an advanced user clustering algorithm.

The central aim of this scholarly investigation was to delve into the strategies employed by these coordinated groups in the realm of information manipulation, with a particular focus on their activities related to the Israel-Hamas war in 2023. Our analytical efforts encompassed the scrutiny of 6,942 news articles, 63,264 social media posts, and 942,205 comments published between August 10–October 10, 2023.


2. Analysis Pipeline

Figure 1: An overview of the coordinated behavior analysis pipeline

Figure 1 illustrates the analysis pipeline of this report, consisting of three components:

  • User Features Construction: We analyze and quantify the behavioral characteristics of a given user and transform these features into user vectors.
  • User Clustering: Leveraging the user vectors, we build a network of related users and apply a community detection algorithm to identify highly correlated user groups, which we categorize as collaborative entities for further analysis.
  • Group Analysis: We delve into the operational strategies of these collaborative groups, examining aspects such as the topics they engage with, their operation methods, and their tendencies to support or oppose specific entities.

In the subsequent sections, we will provide detailed explanations of each of these components.


User Feature Construction

To capture user information on social forums effectively, we propose two feature sets:

  • User Behavior Features
    Data preparation for user behavior features is a critical step in extracting meaningful insights from the given dataset, which encompasses a wide range of information related to social posts (or videos) and user interactions.
    We collected a broad spectrum of raw social data, which was then transformed into a series of columns representing user behavior features, such as the ‘destination of user interactions’ (post_id or video_id), the ‘time of user actions’, and the ‘domain of shared links by users’, and so forth. These user behavior features will be further transformed and organized for use in user similarity evaluation and clustering.
  • Co-occurrence Features
    The purpose of co-occurrence features is to identify users who frequently engage with the same topics or respond to the same articles. We employ Non-Negative Matrix Factorization (NMF) [4] to quantify co-occurrence features among users.
    NMF is a mathematical technique used for data analysis and dimensionality reduction by decomposing a given matrix into two or more matrices in a way that all elements in these matrices are non-negative.
    Specifically, to construct the features for M users and N posts, we build an M * N dimensional relationship matrix to record each user’s engagement with various posts. Subsequently, we apply NMF to decompose this matrix, and we utilize the obtained user vectors as co-occurrence features for each user.
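The NMF step above can be sketched as follows. This is a minimal illustration: the toy engagement matrix and the number of components are hypothetical, and scikit-learn's `NMF` stands in for whichever implementation is actually used.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical toy data: rows = 4 users, columns = 5 posts;
# entry (i, j) counts user i's interactions with post j.
engagement = np.array([
    [3, 0, 1, 0, 0],
    [2, 0, 2, 0, 0],   # engages with the same posts as user 0
    [0, 4, 0, 1, 0],
    [0, 3, 0, 2, 1],   # engages with the same posts as user 2
], dtype=float)

# Decompose the M x N matrix into W (M x k) and H (k x N);
# each row of W is a low-dimensional co-occurrence vector for one user.
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(engagement)   # user vectors
H = model.components_                 # post loadings

print(W.shape)  # (4, 2): one k-dimensional vector per user
```

Users with similar engagement patterns (0 and 1, or 2 and 3) end up with similar rows in `W`, which is what makes these vectors usable as co-occurrence features downstream.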

User Clustering

  • User Similarity Evaluation
    After completing the construction of user features, our next step involves evaluating the coordinated relationships between users. For behavioral features, we compare various behaviors between user pairs and normalize the comparison result to a range between 0 and 1. For instance, concerning user activity times, we record the activity times for each user within a week as a 7×24-dimensional matrix. We then calculate the cosine similarity between pairs of users based on their activity times. In the case of co-occurrence features, we use cosine similarity to assess the similarity between co-occurring vectors of users.
    By computing the cosine of the angle between these vectors, we can deduce the level of similarity between users’ responses or actions. This approach is notably useful in the study of social media, where it permits grouping users according to common behavior patterns [3]. User pairs with high cosine similarity demonstrate a highly coordinated pattern of behavior.
  • User Clustering
    After constructing pairwise similarities among users based on their respective features, we establish an edge between each pair of users whose similarity exceeds a predefined threshold, thereby creating a user network. Subsequently, we apply Infomap to cluster this network. Infomap is an algorithm for identifying community structures within networks using information flow. The communities detected in this network are treated as coordinated groups in the following sections.
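The similarity-then-cluster procedure above can be sketched as follows, under stated assumptions: the 7×24 activity matrices are synthetic, the threshold is arbitrary, and greedy modularity maximization from networkx is used as a readily available stand-in for the Infomap algorithm the report describes.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)

# Hypothetical 7x24 weekly activity matrices for six users, flattened
# to 168-dimensional vectors. Users 0-2 share one underlying activity
# pattern, users 3-5 another.
pattern_a = rng.random(7 * 24)
pattern_b = rng.random(7 * 24)
activity = np.stack(
    [pattern_a + 0.05 * rng.random(168) for _ in range(3)]
    + [pattern_b + 0.05 * rng.random(168) for _ in range(3)]
)

# Pairwise cosine similarity between users' activity vectors.
unit = activity / np.linalg.norm(activity, axis=1, keepdims=True)
sim = unit @ unit.T

# Keep an edge only where similarity exceeds a threshold, then run
# community detection on the resulting user network.
THRESHOLD = 0.95
G = nx.Graph()
G.add_nodes_from(range(len(activity)))
for i in range(len(activity)):
    for j in range(i + 1, len(activity)):
        if sim[i, j] > THRESHOLD:
            G.add_edge(i, j)

groups = [sorted(c) for c in greedy_modularity_communities(G)]
print(groups)
```

With this setup the two behavioral cohorts are recovered as two separate communities; in production the same pipeline runs over the full behavior and co-occurrence feature sets rather than activity times alone.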


Group Analysis

  • Opinion Clustering
    To efficiently understand the narratives proposed by each user group, we employed a text clustering technique on comments posted by coordinated groups. Specifically, we leveraged a pre-trained text encoder to convert each comment into a vector and applied a hierarchical clustering algorithm to group related comments into opinion clusters, which are used in the following analysis.
  • Stance Detection and Narrative Summary
    Large pretrained language models have demonstrated their utility in extracting entities mentioned within textual content while simultaneously providing relevant explanations [5]. This capability contributes to the comprehension of pivotal elements within the discourse, particularly in understanding how comments and evaluations impact the identified entities.
    In this report, we use Taiwan LLaMa for text analysis. Taiwan LLaMa is a large language model pre-trained on native Taiwanese language corpus. After evaluation, it has shown remarkable proficiency in understanding Traditional Chinese, and it excels in identifying and comprehending Taiwan-related topics and entities. To be more specific, we leverage Taiwan LLaMa to extract vital topics, entities, and organizational names from each comment. Furthermore, we request it to assess the comment author’s stance on these entities, categorizing them as either positive, neutral, or negative. This process is applied to all opinion clusters.
    Finally, we calculate the percentage of each main topic/entity mentioned in the opinion group and the percentage of positive/negative sentiment associated with each topic/entity, and we generate a summary for each opinion cluster using the LLM, allowing data analysts to grasp the overall picture of the event efficiently.
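The opinion-clustering step can be sketched as below. The comments are invented for illustration, TF-IDF stands in for the pre-trained text encoder, and scikit-learn's agglomerative clustering stands in for the report's hierarchical clustering; the stance-detection step via Taiwan LLaMa is omitted since it requires the model itself.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

# Hypothetical troll comments pushing two distinct narratives.
comments = [
    "Israel is the aggressor in the West Bank",
    "Israel bears responsibility for the West Bank",
    "Hamas attacks are terrorism, plain and simple",
    "These Hamas attacks are nothing but terrorism",
]

# Encode each comment as a vector (TF-IDF here; the report uses a
# pre-trained text encoder).
vectors = TfidfVectorizer().fit_transform(comments).toarray()

# Hierarchical clustering groups comments voicing the same narrative
# into one opinion cluster.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(vectors)
print(labels)
```

Comments 0/1 and 2/3 land in separate clusters; each resulting cluster would then be passed to the LLM for entity extraction, stance labeling, and summarization.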


In this incident, we discovered different concentrations of information manipulation patterns from troll groups before and after the Hamas attack on October 7th:

The manipulation pattern before the Hamas attack

Tracing back to August 24th, Israel’s far-right National Security Minister, Itamar Ben-Gvir, stated that Israeli rights trump Palestinian freedom of movement: he admitted that his right to move around unimpeded is superior to the freedom of movement of Palestinians in the occupied West Bank, sparking outrage.

Figure 2: A beeswarm plot showing the timeline of the stories after ‘Ben-Gvir says Israeli rights trump Palestinian freedom of movement.’

* Each Circle Represents an Event related to this manipulated story
** The Size of each circle is defined by the sum of the social discussion of that Event
*** The Darker the circle is, the Higher the proportion of troll comments in the Event


This study clustered the manipulated comments from the coordinated troll groups into narratives on the story events above. It was then discovered that Israel was the most manipulated entity, being swayed into a negative light, with accusations pointing towards Israel as the culprit behind the tragedy in the West Bank. On the other hand, as the USA funded and supported Israel, coordinated troll groups also accused the USA of encouraging Israel’s apartheid policy.

Table 1: Aimed entities and summary of narratives manipulated by coordinated troll groups in August.

On September 19th, Israeli Prime Minister Benjamin Netanyahu arrived in the U.S. for a meeting with President Joe Biden amid unprecedented demonstrations in Israel against a planned overhaul of Israel’s judicial system. Also on the agenda is a possible U.S.-brokered deal for normalization between Israel and Saudi Arabia. This study also discovered a series of manipulations from troll groups.

Figure 3: A beeswarm plot showing the timeline of the stories after ‘Netanyahu Prepares for Much-Anticipated Meeting With Biden.’

* Each Circle Represents an Event related to this manipulated story
** The Size of each circle is defined by the sum of the social discussion of that Event
*** The Darker the circle is, the Higher the proportion of troll comments in the Event

In these events, the study identified two contrasting manipulation methods: one involving criticism of Israel and Joe Biden, and the other showing support for Israel, reflecting the overwhelming American support for the country. In the majority of manipulations, coordinated troll groups primarily focused on accusing Israel of human rights abuses and operating an apartheid regime in the Israeli-Palestinian conflict. These groups also criticized Joe Biden’s handling of the incident, attributing it to administrative incompetence and blaming it for a domestic economic downturn.

Table 2: Aimed entities and summary of narratives manipulated by coordinated troll groups in September

Notably, well before the Hamas attack on October 7th, this study revealed that coordinated troll groups were already manipulating discourse in August and September to blame Israel for its apartheid policies and encroachment on the West Bank. Viewed through the lens of this manipulation, such narratives may have served to pre-justify the subsequent attack on Israel. Information manipulation by coordinated troll groups could therefore be a leading indicator of incoming conflicts.


The manipulation pattern after the Hamas attack

AI Labs consolidates similar news articles into events using AI technology and visually represents these events on a timeline using a beeswarm plot. Within this representation, each circle signifies the social volume of an event. The color indicates the percentage of troll activity, with darker shades signifying higher levels of coordinated actions.

Following the incident on October 7, the event timeline is depicted in the figure below. Notably, after Hamas’ attack on Israel, there was a significant spike in activity, with varying degrees of coordinated actions across numerous events.

Figure 4: A beeswarm plot showing the timeline of the story events of Hamas attack

* Each Circle Represents an Event related to this manipulated story

** The Size of each circle is defined by the sum of the social discussion of that Event

*** The Darker the circle is, the Higher the proportion of troll comments in the Event

AI Labs categorizes these events based on different social media platforms, with the analysis as follows:


Case study 1: YouTube

From the data AI Labs collected on YouTube, there are 497 videos with 175,072 comments. Among these, 681 comments are identified as coming from troll accounts, accounting for approximately 0.389% of the total comments. There are 64 distinct troll groups involved in these operations.
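The troll-comment share reported above follows directly from the two counts; as a quick arithmetic check:

```python
# Sanity check of the reported YouTube troll-comment share:
# 681 troll comments out of 175,072 total comments.
troll_comments = 681
total_comments = 175_072
share_pct = troll_comments / total_comments * 100
print(f"{share_pct:.3f}%")  # → 0.389%
```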

The timeline for activity on the YouTube platform is as follows. The most manipulated story was “Israel’s father-to-be was called up! Before going to the battlefield, ‘the pregnant wife hides her face and cries,’ the sad picture exposed.” For this event, the level of troll activity on the YouTube platform was 15.79%.

Figure 5: A beeswarm plot showing the timeline of the most manipulated story on YouTube.

* Each Circle Represents an Event related to this manipulated story

** The Size of each circle is defined by the sum of the social discussion of that Event

*** The Darker the circle is, the Higher the proportion of troll comments in the Event


Utilizing advanced AI analytics, AI Labs conducted an in-depth examination of the actions taken by certain coordinated online groups. The analysis revealed that the primary focuses of these operations encompass subjects like Israel, Hamas, and Palestine. Specifically:

14% of the troll narratives aim to attack Israel, encompassing criticisms related to its historical transgressions against the Palestinian populace. Within this discourse, there is also a prevalent assertion that Western nations display a marked hypocrisy by turning a blind eye to Israel’s endeavors in Palestine. Further, there’s an emergent speculation insinuating Israel’s clandestine endeavors to instigate U.S. aggression towards Iran.

6.5% of the troll narratives direct attention to attacking Hamas. The predominant narratives label Hamas’s undertakings as acts of terrorism, accompanied by forewarnings of potential robust retaliations. Additionally, these narratives implicate Hamas in destabilizing regional peace, with indirect allusions to Iran as the potential puppeteer behind the scenes. 

5.8% of the troll narratives demonstrate a clear pro-Palestine stance. Within this segment, there’s a prevalent contention that suggests that narratives supporting Israel are double standard. Moreover, there’s an emerging narrative painting the conflict as a collaborative false flag orchestrated by Israel and the U.S. It’s noteworthy to mention that the slogan “Free Palestine” frequently punctuates these expressions.

Table 3: Aimed entities and summary of narratives manipulated by coordinated troll groups on YouTube

In addition to the aforementioned, AI Labs utilized sophisticated clustering algorithms to discern predominant narratives emanating from these troll accounts. Preliminary findings indicate that on the YouTube platform, 10.3% of these narratives uphold the sentiment that “Hamas and Palestinians who endorse Hamas are categorically terrorists.” Meanwhile, 4.5% overtly endorse Israel’s tactical responses against Gaza. Of significant interest is the 1.9% narrative subset suggesting that “Hamas’s armaments are sourced from Ukraine”—a narrative intriguingly resonant with positions articulated by Chinese state-affiliated media outlets.

Table 4: Percentage of narratives and aimed entities manipulated by coordinated troll groups on YouTube.

Case study 2: X(Twitter):

AI Labs curated a dataset comprising 8,650 tweets and 61,568 associated replies. Within this corpus, replies instigated by the troll groups totaled 295, constituting approximately 0.479% of the overall commentary volume. In total, there were 34 distinguishable troll groups actively operational.

The timeline derived from the Twitter platform is presented subsequently. The events subjected to the most pronounced troll activities were “CUNY Law’s decision to suspend student commencement speakers following an anti-Israel debacle” and “Democrats cautioning Biden concerning the Saudi-Israel accord,” accounting for 9.09% and 6.50% of troll operations, respectively.

Figure 6: A beeswarm plot showing the timeline of the story events of Hamas attack on X(Twitter).

* Each Circle Represents an Event related to this manipulated story
** The Size of each circle is defined by the sum of the social discussion of that Event
*** The Darker the circle is, the Higher the proportion of troll comments in the Event


In a comprehensive analysis conducted by AI Labs, troll groups on X(Twitter) were observed to predominantly focus their efforts on Israel, Biden, Hamas, and Iran, emphasizing the following:

20.3% of the troll narratives attack Israel, emphasizing that Israel has systematically perpetrated what can be characterized as genocide against the Palestinians, rendering it undeserving of financial backing from the U.S.

10.76% of the troll narratives attack Joe Biden, claiming that his allocation of $6 billion to Iran reflects fiscal and strategic imprudence, and further suggesting that this policy facilitates Iran’s provision of arms to Hamas.

Another 10.76% of the troll narratives attack Hamas, with the core narrative characterizing Hamas’ actions against Israel as acts of terrorism. Critiques against Iran made up 7.6%, predominantly accusing it of being a principal arms supplier to Hamas.

Table 5: Aimed entities and summary of narratives manipulated by coordinated troll groups on X(Twitter)


Leveraging clustering methodologies, AI Labs identified dominant narratives among these troll groups on X(Twitter). Approximately 11.5% of troll replies focused on the theme “Biden’s Financial Backing to Iran”, overtly criticizing Biden’s foreign policy decisions. An additional 7.3% of replies contained links underscoring the alleged disparity in casualty rates between the Gaza Strip and Israel. Moreover, 5.2% of the troll replies appeared to advocate for Trump’s Middle Eastern policies, positioning them as judicious and effective compared to current strategies.

Table 6: Percentage of narratives and aimed entities manipulated by coordinated troll groups on X(Twitter).

Case study 3: PTT

PTT is a renowned terminal-based bulletin board system (BBS) in Taiwan. Our team extracted data from 312 pertinent posts and 62,308 comments on PTT. Of these comments, those attributed to coordinated groups totaled 3,613, representing 5.8% of the overall comment volume. In total, there were 110 troll groups actively participating in discussions.

The temporal analysis on PTT reveals four major incidents with significant user engagement and heightened levels of coordinated activity. These are:

“Schumer meets Wang Yi, urges Mainland China to bolster Israel amidst the Israel-Palestine clash,” with troll activity accounting for 11.06%.

“Over 150 individuals abducted! Hamas spokesperson: Without Israel’s ceasefire, hostage negotiations remain off the table,” exhibiting a troll activity rate of 11.20%.

“Persistent provocations by Hamas! What deters Israel from seizing Gaza? Experts weigh in,” with a troll activity proportion of 9.97%.

“In the Israel-Palestine Confrontation, the mastermind’s identity behind Hamas’s terrorist attacks is revealed. Having evaded numerous assassination attempts in Israel, he has been physically incapacitated and wheelchair-bound for an extended period,” which saw a troll activity rate of 9.88%.

Figure 7: A beeswarm plot showing the timeline of the story events of Hamas attack on PTT

* Each Circle Represents an Event related to this manipulated story
** The Size of each circle is defined by the sum of the social discussion of that Event
*** The Darker the circle is, the Higher the proportion of troll comments in the Event


Utilizing AI, AI Labs delved into the narratives of troll accounts on PTT, identifying their primary targets as Israel, Taiwanese society, and Hamas. Regarding Israel, PTT troll discourse bifurcates into supportive and critical stances. Critical narratives against Israel constitute 10.1%, predominantly highlighting criticisms of Israel’s purported military prowess and alleged discrimination against Palestinians. On the other hand, troll discourse championing Israel represents 8.3%, primarily lauding Israel for its robustness and solidarity. Some voices even suggest that Taiwan could draw lessons from Israel, admiring Israel’s retaliatory methodologies and military discipline.

Given PTT’s status as a localized Taiwanese community platform, it’s unsurprising that discussions also encompass the Taiwanese societal context. A notable 4% of the troll narratives are laden with critiques regarding Taiwan’s perceived lack of unity and skepticism over its capacity to withstand potential Chinese aggression. Meanwhile, 2.5% of troll narratives attack Hamas, primarily decrying their tactics, such as utilizing civilians as bargaining chips, which arguably escalate hostilities.

Table 7: Aimed entities and summary of narratives manipulated by coordinated troll groups on PTT


For narrative categorization, AI Labs employed clustering techniques to classify troll group discussions. Notably, 7.2% of discourse content pointed fingers at Israel, painting it as predominantly belligerent. Additionally, 4% highlighted accusations against Israel for allegedly conducting multiple raids on refugee camps in Jenin earlier that year, while 1.2% of the narrative indicated impending large-scale retaliatory actions by Israel. These threads predominantly hold Israel in a critical light. Conversely, 3.6% of discussions condemned Hamas for allegedly perpetrating terror and indiscriminate civilian harm, whereas 1.4% blamed Iran for purportedly financing Hamas.

Table 8: Percentage of narratives and aimed entities manipulated by coordinated troll groups on PTT


On YouTube, troll entities seem to bolster their engagement and visibility by posting comments that echo narratives familiar from Chinese state-affiliated media, such as claims that “Hamas procures weapons from Ukraine” or that “China can serve as a peace-broker.”

In addition, troll accounts on X(Twitter) appear to leverage the Israel-Palestine conflict as a fulcrum to agitate domestic political discussions, predominantly spotlighting criticisms against Biden’s Iran funding policy.

Figure 8: Troll accounts comment under media tweets to agitate domestic political discussions, predominantly spotlighting criticisms against Biden’s Iran funding policy.


In the case of the PTT, the most pronounced troll narratives seem to portray Israel as inherently aggressive, allegedly implicating them in early-year refugee camp raids. Interestingly, diverging from tactics observed on other platforms, troll entities on PTT chiefly underscore perceived fractures in Taiwanese societal cohesion and purported vulnerabilities against potential Chinese aggression.

Aggregating troll narratives across platforms, we discern that 6% of discussions mock Israel for its purported frequent aggressions against Palestine. Approximately 3.5% advocate for Palestinian statehood, and 3.0% categorize Hamas operations as terroristic. In contrast, 2.2% of discourses support Israel’s proposed actions.

Table 9: Percentage of narratives and aimed entities manipulated by coordinated troll groups on all platforms


Chinese state-affiliated media’s narratives and troll operations

AI Labs analyzed news from Chinese state-affiliated media outlets and observed that certain articles experienced extensive re-circulation within China. On October 8th, the Chinese Ministry of Foreign Affairs responded to the incident where Hamas attacked Israel, urging an immediate ceasefire to prevent further escalation of the situation. Later that day, a video titled “The Israel-Palestine War Has Begun! Who is the Hidden Hand Behind the Israel-Palestine Conflict?”(《經緯點評》以巴開戰了!谁是以巴戰爭的幕後黑手?) was released on the YouTube channel operated by David Zheng. This video echoed the sentiments expressed by the Chinese Ministry of Foreign Affairs, suggesting that “China can serve as a mediator between Israel and Palestine, becoming a peacemaker, thus positioning itself to have greater influence in the world.” There was noticeable activity by troll groups in the comments section of this video, aiming to amplify its reach and propagate its viewpoints.

Figure 9: Statement from the Chinese Ministry of Foreign Affairs and the YouTube video echo the narrative.


Figure 10: Troll accounts comment under the video “The Israel-Palestine War Has Begun! Who is the Hidden Hand Behind the Israel-Palestine Conflict?”(《經緯點評》以巴開戰了!谁是以巴戰爭的幕後黑手?) , aiming to amplify its reach and propagate its viewpoints.

* A comment with a colored background indicates that the commenter is a troll account; the same color means the accounts belong to the same group.


On October 10th, the Chinese Global Times cited a news piece from the Russian state-affiliated media outlet Russia Today, which quoted Florian Philippot, the chairman of the French Patriot Party. The article highlighted allegations that weapons supplied by the US to Ukraine had surfaced in the Middle East and were now being used in violent confrontations, in quantities described as “vast.” This claim has been confirmed as false by the fact-checking organization NewsGuard Technologies. This narrative also manifested within the discourse of troll groups on YouTube: about 1.9% of the discussions echoed this news item propagated by Chinese state-affiliated media on behalf of Russia. AI Labs postulates that this narrative might be a dissemination effort by China on behalf of Russia, aimed at undermining support for Ukraine from the US and its Western allies.

In early June 2023, Dr. Jung-Chin Shen, an Associate Professor at York University in Canada, observed numerous users within China discussing the Israel-Palestine conflict on the platform Douyin. In these discussions, Israel appeared to be consistently retreating, suggesting that Palestine was on the verge of achieving victory in the war.

Figure 11: In early June 2023, screenshots of videos related to the Israel-Palestine conflict on Douyin. (Source: Jung-Chin Shen’s Facebook)



The findings of this article indicate that cognitive warfare and information operations are becoming increasingly sophisticated and are being used to achieve a broader range of objectives. The research also indicates that information manipulation could be a leading indicator of future conflicts.

Specifically, the evidence showed that troll groups manipulated discourse by blaming Israel for its apartheid policy and encroachment on the West Bank in August and September 2023, right before the conflict began. This suggests that information manipulation was used to sow discord and weaken public trust in the Israeli government, which may have contributed to the outbreak of the conflict.

In addition, the research indicated that cognitive warfare and information operations were used to sow discord and weaken public trust in institutions. These tactics were used to exacerbate tensions between Israelis and Palestinians and to undermine the credibility of the Israeli and Palestinian governments.

These findings have implications for the future of cognitive warfare and information operations. They suggest that these tactics are becoming more sophisticated and are being used to achieve a broader range of objectives, and that governments and organizations need to be aware of the threat posed by these tactics and develop strategies to combat them. Future research can investigate how foreign actors, including Iran, Russia, and China, were involved in the conflict; these actors used social media to spread disinformation and propaganda supporting their interests.

This research is still in its early stages, but it has the potential to significantly contribute to our understanding of cognitive warfare and information operations. The findings of this research could be used to inform the development of new strategies to combat these threats and to protect democratic societies from their harmful effects.



[1] Tucker, P. (2023). How China could use generative AI to manipulate the globe on Taiwan. Defense One. https://www.defenseone.com/technology/2023/09/how-china-could-use-generative-ai-manipulate-globe-taiwan/390147/

[2] Beauchamp-Mustafaga, N., & Marcellino, W. (2023). The U.S. Isn’t Ready for the New Age of AI-Fueled Disinformation—But China Is. Time. https://time.com/6320638/ai-disinformation-china/

[3] Al-Otaibi, S., Altwoijry, N., Alqahtani, A., Aldheem, L., Alqhatani, M., Alsuraiby, N., & Albarrak, S. (2022). Cosine similarity-based algorithm for social networking recommendation. Int. J. Electr. Comput. Eng, 12(2), 1881-1892.

[4] Lee, D. D., & Seung, H. S. (2001). Algorithms for non-negative matrix factorization. In Advances in neural information processing systems (Vol. 13, pp. 556-562).

[5] Covas, E. (2023). Named entity recognition using GPT for identifying comparable companies. arXiv preprint arXiv:2307.07420.