One full year observation of Information Manipulations Targeting 2024 EU Parliament Elections

Executive Summary

From July 1, 2023, to June 24, 2024, Taiwan AI Labs observed a total of 26,011 events and 335,045 media reports. Within this dataset, 97 battlefields and 1,624 media reports were specifically related to the EU Parliament Election. Out of 2,752,681 identified troll accounts in the total dataset, 20,041 were active in EU election-related discussions. The overall community volume was 55,307,335, of which 268,080 were associated with the EU election topic.

We observed social media troll activities targeting the European Parliament elections between July 2023 and June 2024. Detailed analysis revealed 20,041 troll accounts in related discussions, contributing 12.58% of the total EU Parliament Election discussion. Over the same period, we identified 1,624 major stories, 16.07% of which were associated with state-affiliated media from China and Russia.

The digital discourse was influenced by orchestrated campaigns from troll groups, highlighting two primary developments: the purported rise of far-right movements within Europe and initiatives to counteract interference on social media by Russia, large tech firms, and other external actors. These troll-driven narratives foster skepticism towards the threat of extremism and promote endorsement of conservative ideologies, shifting public perception. The report finds that 11.76% of comments criticize media reporting, accusing outlets of lacking judgment and exaggerating the influence of Russian propaganda. The trolls also distorted the impending rollout of EU regulations aimed at curbing disinformation on social media platforms, amplifying concerns that government censorship would impact freedom of speech. At the same time, they attacked the authority of mainstream media, expressing disappointment at the news media's inability to counter Russian propaganda and thereby eroding trust in reputable media sources.

Troll groups’ narratives, reinforcing Russian and Chinese state-affiliated media, span concerns challenging the EU’s leadership, with a special focus on the continent’s declining living standards and extending to topics beyond the immediate scope of the European Parliament elections. These narratives targeted five major topics: the Russia-Ukraine war, energy security, digital regulation, migration, and climate change. The consistency of these topics and strategies with Russian state-affiliated media demonstrates a concerted effort to shape public discourse.

Based on coordinated troll activity observed over bi-monthly periods, this report identifies the stories with the highest troll participation. These coordinated efforts manifest in several key areas. Attacks on the EU’s support of Ukraine intensified, with troll participation volume surging from 30.5% (July-August 2023) to 34.97% (September-October 2023), exploiting internal divisions. Economic issues, particularly energy security, contributed 22.04% of troll narratives (September-October 2023) and remained one of the largest battlefields. Troll activities also played a significant role in amplifying far-right voices, reaching 31.34% in November-December 2023. We observed similar strategies in Taiwan, where military expenses and energy security were likewise among the top stories used to undermine trust in the government and in allies’ support. At the same time, troll activities in Taiwan played an important role in distributing discourses supporting the White camp (TPP).

After recognizing these narrative strategies, we selected troll groups spanning multiple narratives and identified the top five narratives by troll volume for each group for detailed analysis. Having identified the Russia-Ukraine war and EU internal disputes as key themes, we found two significant groups: one focusing on EU issues and US politics, and another concentrating on the Ukraine-Russia war and US politics. Both showed an unusual pattern of simultaneously focusing on multiple geopolitical issues across different regions, suggesting coordinated foreign influence attempts.

These two influential troll groups pursue distinct strategies to shape global narratives. The first focuses on Eastern European geopolitics and international military affairs, extensively disseminating stories about the Russia-Ukraine conflict and NATO activities. It curated narratives ranging from military confrontations to Yevgeny Prigozhin’s assassination, aiming to influence public perception of regional security dynamics; during the same period, it also manipulated narratives around the Trump indictments. The second group, in contrast, dedicated its efforts to the internal politics of Western democracies, particularly the UK and the US. It spread negative narratives about political leaders, criticized government decisions, and closely monitored controversial political figures, seemingly intent on swaying voter sentiment and eroding public confidence in democratic institutions.

The strategies employed by these troll groups reveal a landscape of information warfare that is sophisticated and extends beyond traditional propaganda. Unlike more overt disinformation campaigns seen in recent elections or during major global events, these groups operate with subtlety and persistence, weaving their narratives into the fabric of everyday online discourse. Their focus on both international conflicts and domestic politics in Western democracies suggests a nuanced understanding of global power dynamics and the interconnectedness of national and international issues. This approach stands in stark contrast to single-issue disinformation campaigns, such as those targeting specific environmental policies or economic measures. By maintaining a diverse portfolio of narratives across multiple countries and topics, these groups create a pervasive atmosphere of uncertainty and skepticism, potentially influencing public opinion on a scale that isolated, topic-specific campaigns cannot achieve. This comprehensive strategy highlights the evolving nature of information manipulation, posing significant challenges to the integrity of public discourse and the stability of democratic processes worldwide.

As the European Union navigates through this critical period marked by geopolitical shifts, internal challenges, and external manipulations, the role of digital platforms in shaping political narratives becomes evident. The orchestrated activities of troll groups, coupled with the strategic dissemination of narratives that echo state-affiliated media from adversarial nations, highlight a complex web of influence aimed at destabilizing public discourse and swaying electoral outcomes. This environment, ripe with disinformation and polarized ideologies, underscores the urgent need for robust mechanisms to safeguard the integrity of the democratic process.

As EU citizens approach a pivotal election, setting up the AI-powered mechanism with trustworthy partners to establish the collective resilience of the European community against disruptions will be crucial in steering the continent towards a future that reflects its democratic values and principles, ensuring that the voice of the electorate prevails amidst the cacophony of digital warfare.

Download full report: One full year observation of Information Manipulations Targeting 2024 EU Parliament Elections

Observation of Information Manipulations on EU Parliament Elections

Executive Summary

From the start of the year until early April 2024, detailed analysis has revealed the engagement of 16,902 troll accounts in related discussions, making up 12.10% of the total conversation. Media analysis during this period also detected 1,365 instances of media engagement, with 4.84% of these instances (66 mentions) linked to state-affiliated media from China and Russia.

In the lead-up to the European Parliament elections, the digital discourse is heavily influenced by orchestrated campaigns from troll groups, highlighting two primary developments: the purported rise of far-right movements within Europe and initiatives to counteract interference on social media by Russia, large tech firms, and other external actors. These troll-driven narratives foster a deliberate skepticism towards the threat of extremism and promote a robust endorsement of conservative ideologies, skewing public perception. Moreover, they amplify concerns over government censorship, question the integrity of mainstream media, and feed disillusionment with traditional outlets’ handling of topics like Russian propaganda, thereby sowing a deeper mistrust in the media as fair conveyors of information. The impending rollout of EU regulations aimed at curbing disinformation on platforms such as X, TikTok, and Facebook is presented by these trolls as a contentious issue, complicating efforts to balance election security with the maintenance of free speech rights.

Troll groups’ narratives, reflecting those in Russian and Chinese state-affiliated media, span a wide array of concerns impacting Europe, with a special focus on the continent’s declining living standards and extending to topics beyond the immediate scope of the European Parliament elections. These narratives delve into issues like Russia’s War in Ukraine, energy security, digital regulation, migration, and climate change, showcasing a thematic coherence that transcends borders and highlights key geopolitical and socio-economic challenges. This alignment underscores a concerted effort to shape public discourse around critical issues affecting Europe, revealing a strategic intersection of interests among troll groups and state media from Russia and China, particularly on matters such as energy security and geopolitical conflicts.

The narrative landscape of online trolling has been significantly influenced by two predominant troll groups, each advocating distinct agendas on the platforms Twitter and YouTube. On Twitter, one group has launched intense criticism against Joe Biden, scrutinizing his mental health and policy decisions, and has targeted Canadian Prime Minister Trudeau with accusations of economic mismanagement and media bias, casting Canada unfavorably. Conversely, on YouTube, the narrative pushed by another troll group expresses dissatisfaction with financial assistance to Ukraine, levels criticism at NATO for intensifying conflicts, and challenges U.S. foreign policy—particularly its support for Israel—advocating for President Biden to adopt a stance more sympathetic to Palestine.

As this intricate web of disinformation unfolds, similar strategies observed in the recent Taiwan presidential election and the TikTok banning event in the U.S. shed further light on the pervasive tactics employed. In both instances, narratives around the regulation of social media platforms echo those seen in the EU, with troll groups vocally championing ‘freedom of speech’ as a principal argument against regulation. Furthermore, when the discourse veers towards the rise of far-right ideologies, these groups skillfully divert the conversation by asserting that the EU should prioritize resolving other more critical issues. This tactic of distraction aligns with their broader strategy of undermining focused discussions on pressing matters. Additionally, a recurring pattern emerges where media outlets that highlight misinformation become targets themselves, accused of bias or incompetence. This tactic not only challenges the credibility of the media but also attempts to dilute the severity of misinformation issues, reflecting a sophisticated approach to disrupting coherent public discourse on a global scale.

As the European Union navigates through this critical period marked by geopolitical shifts, internal challenges, and external manipulations, the role of digital platforms in shaping political narratives becomes increasingly evident. The orchestrated activities of troll groups, coupled with the strategic dissemination of narratives that echo state-affiliated media from adversarial nations, highlight a complex web of influence aimed at destabilizing public discourse and swaying electoral outcomes. This environment, ripe with disinformation and polarized ideologies, underscores the urgent need for robust mechanisms to safeguard the integrity of the democratic process. As EU citizens approach a pivotal election, setting up the AI-powered mechanism with trustworthy partners in order to establish the collective resilience of the European community against disruptions will be crucial in steering the continent towards a future that reflects its democratic values and principles, ensuring that the voice of the electorate prevails amidst the cacophony of digital warfare.

Introduction

As the European Union approaches the June 2024 European Parliament elections, it faces a complex and rapidly changing global environment. These elections come in the wake of significant global events, including the ongoing conflict in Ukraine, the aftermath of the COVID-19 pandemic, the completion of Brexit, and the anticipation of the upcoming U.S. presidential election, potentially featuring Donald Trump as a candidate. The political landscape within the EU is experiencing a noticeable shift toward right-wing ideologies, challenging traditional political factions and signaling a possible reconfiguration of power in Brussels. The upcoming elections are poised to be a pivotal moment: while the existing European Commission has introduced significant legislative measures, the forthcoming Commission may chart a new course, potentially altering the EU’s strategic direction.

Download full report: Observation of Information Manipulations on EU Parliament Elections

Observation of Information Manipulations against TikTok Ban

Executive Summary

The digital realm has witnessed TikTok’s rapid ascension, captivating a global audience with its vibrant content and advanced algorithms. Yet, this rise has been shadowed by significant controversies, especially regarding debates in the United States over a potential TikTok ban, fueled by concerns over national security, data privacy, and the spread of misinformation. This contentious issue has sparked extensive discussions among lawmakers, technology experts, and digital communities, uncovering a complex web of digital manipulation and misinformation. From January 1 to March 25, 2024, an in-depth analysis recorded the involvement of 9,080 troll accounts in these debates, accounting for 11.73% of the total dialogue, thus underscoring the significant impact of troll-driven narratives on shaping public discourse.

This conversation spans three critical incidents: legislative efforts to restrict minors’ access to social media, the Biden campaign’s strategic engagement with TikTok, and debates surrounding the enactment of H.R. 7521. Each of these scenarios has ignited varied online reactions, with a notable share of the conversation being influenced by troll accounts. For instance, the initiative in Florida to limit social media access for minors saw 12.25% of its discussion driven by troll accounts, highlighting debates on the balance between individual rights and governmental authority. The discourse concerning the Biden campaign’s use of TikTok and the legislative debates on H.R. 7521 further delve into issues of free speech, privacy, and governmental oversight, along with critiques of political leadership and representation. Trolls have extended their reach to manipulate discussions on a broad array of events, from global conflicts to international diplomacy, including: conflicts within European Union countries, notably Germany, Lithuania, and Sweden; the arrest of a Japanese crime boss involved in an attempt to smuggle nuclear materials to Iran; the Israel-Hamas conflict; the Ukrainian-Russian war; and issues pertaining to China’s international diplomacy and geopolitical strategies, each aiming to influence public opinion and policy.

The narrative varies across different platforms—Twitter, YouTube, Weibo, and TikTok—with observed lower manipulation activity on Facebook. Discussions on Twitter often revolve around political dissatisfaction and concerns over privacy and national security, while YouTube critiques focus on U.S. leadership and TikTok’s content moderation practices. Weibo users tend to criticize U.S. policies, portraying them as bullying, whereas TikTok discussions emphasize speech restrictions and systemic critiques. These discussions often serve to challenge authority, question leadership, mobilize youth opposition, and notably, accuse the U.S. of violating First Amendment rights.

A comparison of narratives on American versus Chinese-owned social platforms reveals distinct focuses. Chinese platforms tend to argue that the U.S. approach to banning TikTok differs from that of other Western countries, suggesting that such a ban does not reflect the will of the American people and often pointing out that U.S. companies engage in more surveillance of their citizens than TikTok.

Furthermore, the analysis highlights a deliberate effort by troll accounts to echo narratives promoted by Chinese state-affiliated media, aiming to critique U.S. policies on free speech through the lens of the TikTok ban debate. By aligning with the viewpoints of outlets like Guangming Daily and Takungpao, these accounts play a pivotal role in spreading narratives that accuse the U.S. of hypocrisy regarding free speech and censorship, attempting to sway public opinion in favor of allowing TikTok to operate freely in the U.S. This concerted action underlines the strategic use of digital platforms in the broader geopolitical struggle, emphasizing the power of narrative in shaping the discourse on digital governance and international relations.

Introduction

In the past few years, the digital arena has seen the explosive growth of TikTok, a platform that has enchanted millions with its compelling content and cutting-edge algorithms. Yet, this surge in popularity is intertwined with significant controversy. Central to the discord is the ongoing debate over the potential prohibition of TikTok in the United States, driven by apprehensions regarding national security, data privacy, and the proliferation of false information. This matter has ignited intense discussions among both policymakers and tech experts, drawing widespread attention across online communities. Amidst this chaos, thorough research has uncovered a complex environment where digital manipulation and misinformation thrive, revealing the intricate challenges at the heart of modern digital discourse.

2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Executive Summary

During the tumultuous period surrounding Taiwan’s presidential election, an extensive research endeavor unveiled a nuanced landscape of digital manipulation and misinformation. At the heart of this investigation lay the pervasive influence of generative technology, which emerged as a potent tool for shaping the informational battleground. Textual misinformation, propelled by the capabilities of generative algorithms, assumed a central role, challenging traditional debunking methodologies and rendering them less effective in the face of evolving manipulation tactics. This shift underscored the pressing need to adapt information verification strategies to contend with the sophisticated mechanisms employed in modern disinformation campaigns.

Amidst the cacophony of digital discourse, a select cadre of troll groups emerged as influential arbiters of media framing, transcending mere mischief to exert substantial sway over public opinion. Contrary to initial assumptions, these groups were revealed to be non-native to Taiwan, operating with remarkable agility across linguistic and cultural boundaries. Their strategic maneuvers, characterized by narrative manipulation and geopolitical intrigue, underscored the transnational nature of contemporary information warfare and its implications for democratic processes. As the election approached, their influence surged, casting a pervasive shadow over the democratic landscape and underscoring the need for heightened vigilance against external interference.

Furthermore, collaborative efforts between mainstream entities and official media channels served to amplify the reach and impact of manipulated narratives. The alignment of these groups with state-supported agendas, particularly evident on Chinese-owned social media platforms, underscored the symbiotic relationship between information manipulation and political influence. Within this complex ecosystem, the emergence of short videos as a prevalent medium for cognitive manipulation added another layer of complexity, blurring the lines between fact and fiction in the digital realm. Thus, within the context of Taiwan’s presidential election, the convergence of technological innovation, geopolitical maneuvering, and media manipulation underscored the multifaceted challenges confronting modern democracies in safeguarding the integrity of public discourse.

Introduction

In 2024, many countries held elections, and Taiwan emerged as a benchmark for the impact of foreign information operations worldwide. As the first democratic country to conduct elections in 2024, and a pioneer in using AI technology to observe information manipulation during an election, Taiwan AI Labs hopes to share Taiwan’s experiences and lessons in dealing with information manipulation, along with the threats of generative AI, with other democratic nations.

Taiwan AI Labs, by observing and analyzing the coordinated behavior of troll accounts on social media platforms, identifies these accounts and groups them into troll groups. Utilizing large language models and AI, we developed the Infodemic platform to not only reveal the activities of these troll groups but also to delve deeper into the abnormal behaviors behind these accounts.
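The coordination signal described above can be illustrated with a minimal sketch. This is not the actual Infodemic implementation (which is not public and also weighs timing, wording, and interaction patterns); it simply groups hypothetical accounts whose thread-participation overlap, measured by Jaccard similarity, exceeds a threshold.

```python
def jaccard(a, b):
    """Overlap between the sets of threads two accounts commented in."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def group_coordinated_accounts(activity, threshold=0.5):
    """Greedily cluster accounts whose thread-participation overlap
    exceeds `threshold`. A toy stand-in for coordination detection:
    real systems combine many more behavioral signals."""
    accounts = list(activity)
    groups, assigned = [], set()
    for acct in accounts:
        if acct in assigned:
            continue
        group = {acct}
        for other in accounts:
            if (other not in assigned and other != acct
                    and jaccard(activity[acct], activity[other]) >= threshold):
                group.add(other)
        if len(group) > 1:  # singletons are not "groups"
            groups.append(group)
            assigned |= group
    return groups

# Hypothetical data: account id -> threads it commented in
activity = {
    "acct_a": ["t1", "t2", "t3"],
    "acct_b": ["t1", "t2", "t3", "t4"],
    "acct_c": ["t9"],
}
print(group_coordinated_accounts(activity))  # acct_a and acct_b cluster together
```

Accounts that consistently appear in the same threads form a candidate troll group; further analysis of their discourse then characterizes the group's narratives.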

During this presidential election period, Taiwan AI Labs conducted weekly analyses of online anomalies using the Infodemic platform, inviting scholars and experts to discuss and share insights derived from our analyses. This report compiles the key findings from our organized data.

Download full report: 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

2024 Jan W1 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Insights on Manipulation Strategies

This week, we detected a significant number of potentially AI-generated scandalous videos, which were disseminated with the assistance of troll groups. Their aim was to tarnish the image of specific candidates.

Chinese state-affiliated media outlets this week focused heavily on topics such as the “Taiwan Strait crisis” and the “high-end vaccine scandal.” Troll groups followed suit, amplifying discussions related to these issues in public opinion.

In this analysis, we observed the creation of likely fake Facebook pages that initially attract the general public with video content before shifting to sharing political topics to influence readers.

Some accounts simultaneously operated on domestic and international events, with discourse highly resembling Chinese state-affiliated media (similarity scores of 42.6% and 37.2%). They actively participated in the current presidential election through Facebook groups #61009 and #61019. This week, their activities were less focused on the Ko Wen-je fan page and were primarily centered around specific candidates and the incumbent president’s fan pages, with a discourse primarily aimed at attacking a particular political party.

In the National Defense Ministry’s national-level alert event, fake accounts systematically shared information within social groups to promote the government and the ruling party in the upcoming elections.

Download full report: 2024 Jan W1 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

2023 Dec W4 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Insights on Manipulation Strategies

This week, AI Labs focused on troll operations and historical behavior on Facebook, organizing related information about these two groups on the Infodemic website as supplementary material.

From September to November 2023, China prominently used war threats against Taiwan, accusing the Taiwanese government of pushing the island toward the brink of war. In December, with the election approaching, the tone of war threats decreased, and China shifted to emphasize educational and economic issues, focusing on “the impact of ECFA’s termination on Taiwan’s economy” and shifting from “the wave of university closures in Taiwan” to “De-Sinicization of Taiwan’s curriculum.”

AI Labs analyzed which troll groups from September to December most closely echoed PRC state-affiliated media, finding that Facebook #61009 (42.6%) and Facebook #61019 (37.2%) had the highest resonance.
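A resonance score like the percentages above can be sketched as a text-similarity computation. The actual Infodemic pipeline is not public; this illustrative stand-in uses bag-of-words cosine similarity rather than LLM embeddings, and all example texts are hypothetical paraphrases.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between simple bag-of-words vectors.
    A transparent stand-in for embedding-based narrative matching."""
    va = Counter(text_a.lower().split())
    vb = Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical texts for illustration
state_media = "termination of ECFA will damage Taiwan economy and trade"
troll_group = "ECFA termination will damage Taiwan economy badly"
unrelated   = "local weather forecast predicts rain this weekend"

print(round(cosine_similarity(state_media, troll_group), 2))  # high overlap
print(round(cosine_similarity(state_media, unrelated), 2))    # near zero
```

Averaging such scores between a troll group's comments and a corpus of state-media articles yields a single resonance figure per group, allowing groups to be ranked as in the analysis above.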

As the election approached, Facebook #61009’s narratives closely aligned with official media, focusing on war threats against Taiwan and attacking education and economic issues. Domestically, they mainly targeted Tsai Ing-wen as a ‘fake Ph.D.’; internationally, they criticized U.S. domestic issues in English, branding Biden as a dictator. They predominantly used livelihood issues as their attack strategy in both domestic and international operations.

Compared to Facebook #61009, Facebook #61019 more frequently used abusive language and continuously flooded specific content under candidates’ posts, influencing discussion content.

This week, these same two groups ranked as the top two in operational volume on the New Taipei middle school student throat-slitting case, linking the incident to Tsai Ing-wen’s government and to support for abolishing the death penalty in Taiwan, thereby undermining the image of Tsai’s government.

This week, the troll group with the highest volume on PTT, PTT #60001 (8.9%), used the Distort tactic to spread narratives linking the DPP with the Chinese Communist Party, alleging Lai Ching-te’s lies and the spreading of misinformation, thereby influencing people’s perception of the facts.

This week on the TikTok platform, troll groups supporting the KMT, TikTok #74075 (75%) and TikTok #74023 (25%), intensified their efforts, participating in various issues related to the KMT, including Hou Yu-ih’s Kai Xuan Yuan controversy and Jaw Shaw-kong’s slip of the tongue. For the aforementioned issues detrimental to the Blue Camp, they used the Distract tactic, repeatedly commenting “KMT governance brings peace and security to the people, voting for all KMT candidates” to dominate related topic pages and shift the public’s focus. This action aligns with the main theme of this week’s PRC state-affiliated media, “KMT governance will improve the economy,” and echoes the operations observed on TikTok and YouTube.

This week, the controversy over Lai Ching-te’s illegal construction of his family home continued from last week’s operations on various platforms, with PRC state-affiliated media also echoing related narratives. However, after the New Taipei student throat-slitting case on the 25th, PTT and Facebook saw related narratives attacking the ruling party on the 27th, followed by PRC state-affiliated media echoing the operation of this event on the 28th.

This week, PRC state-affiliated media’s additional main theme continued last week’s ECFA-related operations, then focused on the topic of grouper fish imports under the theme “KMT governance will improve the economy,” with echoing operations observed on TikTok and YouTube.

On PTT, the highest operational volume on December 23rd and 24th consisted of attacks on Jaw Shaw-kong’s and Cynthia Wu’s slips of the tongue.

Download full report: 2023 Dec W4 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

2023 Dec W3 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Insights on Manipulation Strategies

From September to November 2023, PRC state-affiliated media’s primary narrative involved threatening Taiwan with war, accusing the Taiwanese government of pushing the island towards the brink of war. In December, as the election approached, the tone of war threats diminished. China shifted its focus to Taiwan’s educational system and economic issues, particularly highlighting the “impact of the termination of ECFA on Taiwan’s economy” and moving from “the wave of university closures in Taiwan” to the “De-Sinicization of Taiwan’s curriculum design” as key education-related issues.

AI Labs’ analysis from September to December identified troll groups most closely echoing PRC state-affiliated media narratives, with Facebook #61009 (42.6%) and Facebook #61019 (37.2%) showing the highest level of resonance.

As the election drew closer, the narrative trends of Facebook #61009 closely aligned with China’s official media, focusing on war threats against Taiwan and primarily attacking educational and economic issues. Domestically, the group mainly targeted Tsai Ing-wen, labeling her as a ‘fake Ph.D.’; internationally, they criticized U.S. domestic issues in English, branding Biden as a dictator. Both domestic and international operations prominently used livelihood issues as their primary attack strategy.

Since September, Facebook #61019 has mirrored official media trends in the narrative of “The U.S. disregards the life and death of Taiwanese people,” with recent narratives also including war threats and economic issues. Domestically, the group focused on Tsai Ing-wen’s thesis controversy, while internationally, they attacked U.S. foreign policy failures in English, claiming a stronger voice in Taiwan favoring unification.

Regarding the coordinating behavior of this week, PRC state-affiliated media continued last week’s narrative and theme, focusing on “DPP’s election will lead to military danger” and “Termination of ECFA impacts Taiwan’s economy.” Following China’s announcement on December 21st terminating 12 ECFA tariff preferences, there was a surge in operations on PTT and YouTube.

Throughout the week, platforms in Taiwan featured narratives attacking the illegal construction at Lai Ching-te’s family home, with official media also discussing this from December 19th to 21st. Analyzing related narratives across platforms, PTT and Facebook had the highest activity levels. The groups most closely echoing official media, Facebook #61019 (33.8%) and Facebook #61009 (26.6%), were also the most active, employing tactics like repetitive comments using phrases such as “Lai Pi Liao (賴皮寮)” and “Refusing to demolish (賴著不拆)” to manipulate the discussion.

Download full report: 2023 Dec W3 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

2023 Dec W2 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Insights on Manipulation Strategies

This week, PRC state-affiliated media had two major stories, “DPP’s election will lead to military danger” and “De-Sinicization of Taiwan’s curriculum,” both echoed by troll groups. The story of “DPP’s election will lead to military danger” was echoed by troll groups on Facebook, while the story of “De-Sinicization of Taiwan’s curriculum” was amplified by troll groups on PTT and TikTok.

The aggregated top stories from PRC state-affiliated media from October through this week were “Taiwan pushed to the boundary of military conflict” (23%), “The U.S. disregards the death of Taiwanese people” (15.6%), and “Termination of ECFA sacrifices Taiwan’s economy” (13%). The volume of “Taiwan pushed to the boundary of military conflict” is decreasing, while “Termination of ECFA sacrifices Taiwan’s economy” is growing.

In TikTok’s troll operations, the second-ranked group, TikTok #74046 (11%), underwent one account suspension and two shifts in public opinion strategy within five months. In early July, when the KMT presidential candidate Hou Yu-ih’s support was waning, the group backed the Gou-Han pairing, aiming to influence the KMT’s decision. By the end of November, as the KMT-TPP collaboration was nearing collapse, the group shifted its support to Ko Wen-je.

AI Labs analyzed comments from troll groups on YouTube from October 1st to December 10th and found that YouTube #71012 (11%), YouTube #71319 (7%), and YouTube #71341 (6%) are the most active troll groups on the platform. Among these, YouTube #71012 primarily attacks Ko Wen-je (48.1%) and supports Lai Ching-te (11.5%), YouTube #71319 mainly attacks Ko Wen-je (45.3%) and supports Hou Yu-ih (6.7%), while YouTube #71341 focuses on attacking Tsai Ing-wen (32%) and the DPP (9.9%). These groups primarily use repetitive commenting to steer discussion trends on media channels or to boost interaction on influencer channels, thereby influencing the algorithm.

Download Full Report: 2023 Dec W2 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

2023 Dec W1 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Insights on Manipulation Strategies

After the failure of the KMT-TPP collaboration, the most active troll account groups (PTT #60011, Facebook #61009, and Facebook #61019) intensified their attacks, targeting the Democratic Progressive Party exclusively. Before the collaboration collapsed, those groups had also attacked the KMT.

During the same period, the top narrative promoted by PRC state-affiliated media, the “Choice Between Peace and War” concept, accounted for 45.8% of all their news. Within this, 30% of the themes involved misleading narratives distorted from articles published by American scholar Bonnie Glaser. The related news articles were quickly and effectively distributed by Facebook troll groups.

An analysis of coordinated behavior from November 1st to December 10th showed that the highest concentration of troll activity targeted Tsai Ing-wen’s Facebook fan page (34.3%), followed by those of Lai Ching-te (8.5%), Ko Wen-je (2.77%), Terry Gou (2.76%), and Hou You-yi (2.50%). Two Facebook troll account groups, #61009 and #61019, contributed over 50% of total troll activity across all fan pages. These two groups had also actively distributed “South China Sea Working Conference” misinformation in July, attacking Taiwan’s military expenditure and discrediting United States support.

Download Full Report: 2023 Dec W1 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

2023 Nov W4 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report

Insights on manipulation strategies

This week, following the collapse of the KMT-TPP alliance, there was a noticeable surge in mutual criticism between the KMT and TPP across platforms, with attack intensity recorded as follows: Facebook at 12.9%, YouTube at 23.7%, PTT at 26.7%, and TikTok at 26.7%. Notably, on YouTube, criticism was directed more heavily at the KMT than the TPP, whereas on the other platforms, Ko Wen-je was the primary target.

An analysis of the two most active Facebook troll groups, #61009 and #61019, revealed similar patterns in their active periods and targets of criticism. Both groups suddenly became active on September 6th, the day Terry Gou announced his candidacy. They were active on similar stories and predominantly critiqued the KMT, the DPP, and Ko Wen-je, each accounting for an average of 15% of their content, with comparatively less focus on Terry Gou.

Throughout this week, Tsai Ing-wen’s Facebook fan page became a hotspot for a large volume of coordinated comments. Some of these comments echoed Chinese state-affiliated media narratives, suggesting that the DPP’s continued hold on power could escalate military tensions and conflict risks. Simultaneously, narratives favoring the KMT appeared on PTT, YouTube, and TikTok, which subsequently found resonance in the narratives pushed by Chinese state-affiliated media.


Download Full Report: 2023 Nov W4 – 2024 Taiwan Presidential Election Information Manipulation AI Observation Report