
Taiwan AI Labs Founder Ethan Tu at NATO Summit

Photo Caption: Ethan Tu, founder of Taiwan AI Labs, was invited to Washington, D.C. to attend the NATO summit. He delivered a speech on the topic of information warfare during the 2024 European Parliament elections and engaged in discussions with the Special Competitive Studies Project (SCSP), an AI and national security think tank based in Washington.


[Taipei, Taiwan, July 12, 2024]  The manipulation of information by certain forces to influence election outcomes has become a global concern. On July 9, Ethan Tu, founder of Taiwan AI Labs, attended the NATO summit in Washington, D.C., where he delivered a speech addressing the issue of information manipulation during the 2024 European Parliament elections. Tu called for the establishment of trustworthy AI verification mechanisms to address the current crisis of information warfare facing democracies.

Taiwan AI Labs’ cognitive security tool, Infodemic, analyzed 26,011 social media battlefields and 335,045 major news stories across multiple platforms, including Facebook, YouTube, and X, from July 1, 2023, to June 24, 2024. According to Infodemic’s analysis, 20,041 troll group accounts were active during the EU Parliament election period, exhibiting abnormal, coordinated behavior inconsistent with genuine users. These troll groups accounted for 12.58% of the discussions related to the election.
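The press release does not describe Infodemic’s detection method, but one common signal for coordinated inauthentic behavior is accounts whose activity footprints overlap far more than organic users’ would. The sketch below is purely illustrative (not Infodemic’s actual algorithm): it flags account pairs with near-identical posting footprints via Jaccard similarity, then computes the flagged accounts’ share of total activity, analogous to the 12.58% figure above. All names and thresholds are hypothetical.

```python
from itertools import combinations

def jaccard(a, b):
    """Overlap of two accounts' activity footprints (thread + hour pairs)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def flag_coordinated(accounts, threshold=0.8):
    """Flag accounts that share a near-identical footprint with another account.

    accounts: dict mapping account id -> list of (thread_id, hour_bucket) events.
    threshold: illustrative similarity cutoff, not a calibrated value.
    """
    flagged = set()
    for (id_a, ev_a), (id_b, ev_b) in combinations(accounts.items(), 2):
        if jaccard(ev_a, ev_b) >= threshold:
            flagged |= {id_a, id_b}
    return flagged

# Toy data: two accounts posting in lock-step, one organic account.
accounts = {
    "acct_1": [("thread_9", 14), ("thread_12", 14), ("thread_30", 15)],
    "acct_2": [("thread_9", 14), ("thread_12", 14), ("thread_30", 15)],
    "acct_3": [("thread_2", 9), ("thread_17", 20)],
}
flagged = flag_coordinated(accounts)
# Share of all activity attributable to flagged accounts.
share = sum(len(accounts[a]) for a in flagged) / sum(map(len, accounts.values()))
print(sorted(flagged), f"{share:.0%}")
```

Real systems combine many more signals (content similarity, account age, network structure); this only conveys the shape of the "abnormal behavior → share of discussion" measurement.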

The most influential troll groups were divided into two main categories. One group focused on Eastern European geopolitics and international military issues, such as the Russia-Ukraine war and NATO actions. The other group concentrated on attacking Western democratic systems, particularly the UK and the US, spreading negative narratives about national leaders and government decisions, aiming to undermine public confidence in democratic institutions. Further analysis revealed that these groups operated transnationally, also engaging in issues related to the US presidential election.

Troll Groups Align with Authoritarian State Media: European Parliament Faces Information Warfare Similar to Taiwan’s Presidential Election

The analysis indicated that these troll groups shaped public opinion by criticizing mainstream media for a lack of impartiality, amplifying Russian propaganda, and attacking EU digital regulations aimed at combating misinformation and hate speech. This rhetoric potentially fosters far-right conservative ideologies and seeks to cast doubt on the threats posed by rising extremism.

Moreover, during the same period, Infodemic used FedGPT to analyze 1,624 major news events, finding that 16.07% were related to Russian and Chinese state media. Notably, during the European Parliament election, on key issues such as the Russia-Ukraine war, energy security, digital regulation, immigration, and climate, the narratives of these troll groups aligned with those of Russian and Chinese state media, challenging the EU’s leadership narrative.

For instance, comparing July-August 2023 to September-October 2023, online attacks on the EU’s support for Ukraine increased from 30.5% to 34.97%. Negative public opinion on economic issues, particularly energy security, accounted for 22.04% of troll group narratives in September-October 2023, making it one of the largest battlegrounds during the election period.

Tu specifically pointed out that these opinion manipulation strategies are similar to phenomena observed by Infodemic during Taiwan’s 2024 presidential election. Troll groups exploited issues like military spending and energy security to erode public trust in the government and Taiwan’s international allies. Compared to traditional election propaganda, the information warfare strategies used by these troll groups are more sophisticated. They do not merely spread misinformation on single issues but weave targeted narratives across multiple countries and topics into everyday online interactions, deepening their impact on public opinion. This presents a significant challenge to democratic governance.

Global Stability Threats Extend Beyond Physical Battlefields: Developing Trustworthy AI Mechanisms to Counter Information Warfare

In his NATO summit speech, Tu built on Taiwan AI Labs’ recent engagement at the inaugural AI Expo for National Competitiveness, held in the U.S. in May 2024. That event facilitated another critical exchange with the Special Competitive Studies Project (SCSP), the influential Washington think tank, on the pressing issues of information warfare and national security.

Photo Caption: Taiwan AI Labs was invited to participate in the first AI Expo for National Competitiveness, where founder Ethan Tu delivered a speech sharing Taiwan’s experience developing the open-source large language models TAIDE and TAME, as well as the alliance-level FedGPT. He discussed applying AI technology to identify information manipulation and protect digital democracy.


Before the NATO summit, SCSP Executive Director Ylli Bajraktari published an article titled “AI-Augmented Disinformation Is NATO’s New Battlefield” on Project Syndicate, a leading global commentary website. Bajraktari argued that threats to global stability have moved beyond traditional military domains, and that NATO member states must respond directly to cognitive warfare waged by hostile authoritarian regimes. Specific measures include investing in tools capable of assessing content authenticity, such as large language models (LLMs) and AI classifiers, to detect AI-generated or altered content and to identify malicious activity by adversarial actors on digital platforms, thereby reducing their influence on the public. Additionally, because information manipulation often crosses national borders, NATO should ally with governments, civil-society organizations, and private companies to establish early-warning systems and jointly counter large-scale information warfare.

Ethan Tu stated that 2024 is a global election year. Echoing Bajraktari’s views, he said it is crucial to collaborate with reliable partners to establish AI mechanisms that expose information manipulation by coordinated online accounts, thereby strengthening the resilience of democratic systems against threats such as disinformation and ideological polarization. Taiwan AI Labs’ Infodemic is an AI-driven cognitive security platform designed to detect modern information warfare practices, track the spread of false information, and reveal the targets, tactics, and strategies employed by adversaries that may impact national security.