Philippines-based pro-Marcos Jnr ChatGPT accounts banned over AI misuse
OpenAI, the creator of ChatGPT, said the banned accounts were used to promote views favourable towards the Philippine president

OpenAI has banned a network of ChatGPT accounts originating from the Philippines that used its platform to generate social media comments praising President Ferdinand Marcos Jnr, a crackdown that highlights how artificial intelligence (AI) can be misused to conduct influence operations.
The US-based AI pioneer said it had identified the accounts using ChatGPT to generate short comments in English and Filipino, which were later posted on Facebook and TikTok. It dubbed the operation “High Five” because many of the comments included emojis, according to a report released on June 5.
“The comments this operation generated and posted online were brief but partisan. Typically, they praised President Marcos and his initiatives, or criticised [Vice-President Sara Duterte-Carpio],” OpenAI wrote in its report.
The company also noted that some comments nicknamed the vice-president “Princess Fiona”, possibly in reference to the Shrek movie series. Memes portraying Duterte-Carpio as the ogre princess from the films have been circulating online in recent years.
OpenAI said the operation appeared to follow a three-stage process.
First, the actors used ChatGPT to analyse social media posts about political developments involving Marcos Jnr and asked the model to suggest themes for replies. Next, they used the model to generate huge quantities of short comments – typically no more than 10 words – based on those themes. Finally, they prompted ChatGPT to produce public relations pitches and statistical analyses for the “covert influence operation”.
According to the report, five TikTok channels were created for the operation, aimed at “promoting President Marcos’ agenda”. The channels appeared to have started posting content in mid-February this year, often repeating the same videos with different captions.

Dozens of TikTok accounts would then reply to the videos using the comments generated by the operation.
“The TikTok accounts that posted the comments did not post any videos, did not follow any other accounts, and typically had 0–10 followers. This commenting activity may have been designed to make the TikTok channels look more popular than they actually were,” OpenAI said in its report.
The generated comments were also shared under Facebook posts published by mainstream media outlets, usually by accounts created in mid-December last year – all of which had zero friends.
OpenAI said it had traced the online activities to Comm & Sense, a marketing and public relations company based in Makati City. This Week in Asia has reached out to Comm & Sense for comment.
Its report also described other banned China-linked influence activities, as well as intelligence gathering involving individuals posing as journalists based in Europe or Turkey.
“In the content that they generated and posted online, the operators described ‘Focus Lens News’ as an independent European-based entity specialising in analysis and reporting,” the report read.
Other activities detailed in the report included generating social media comments in Chinese, English, and Urdu on political topics relevant to China, such as Taiwan and the shutdown of the US Agency for International Development.

AI-enabled disinformation campaign
Observers said that the activities detected by OpenAI indicated how AI – particularly large language models (LLMs), which are trained to generate humanlike text – had transformed influence and disinformation operations.
“[LLMs] such as ChatGPT have fundamentally altered the landscape of influence operations by dramatically lowering the cost, effort, and skills required to produce convincing disinformation at scale,” Dominic Ligot, the founder of the social impact AI enterprise CirroLytix and data ethics organisation Data Ethics PH, told This Week in Asia.
The capacity of such influence operations had increased exponentially, with numerous humanlike posts quickly generated, Ligot said. These operations could hide traces of copying and pasting, as well as manipulate information and falsely reflect a diversity of views across languages and cultural contexts, he added.
Benito Teehankee, a professor of business ethics at De La Salle University, said that LLMs such as ChatGPT had a huge impact on online disinformation because of their ability to mass produce and customise content at the same time.
“Bad actors can mass generate manipulative messaging and transmit these through various platforms and channels blindingly fast. At the same time, these actors can adapt the core disinformation message to as many target groups in terms of interests, language, media preferences,” he told This Week in Asia.
There is a greater financial incentive to generate disinformation than accurate information, according to Teehankee.
Ligot warned that disinformation actors could sidestep content filters by running models on private servers in the Philippines and elsewhere, allowing them to disseminate large volumes of propaganda while avoiding the robust oversight provided by LLM providers such as OpenAI.
To counter the threat, Ligot proposed requiring online platforms to disclose the use of AI in generating content, especially on political topics, and creating watermarks or metadata to trace content back to its original source.
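As a rough illustration of the metadata idea Ligot describes, the hypothetical Python sketch below shows one way a platform might attach a provenance record to a piece of AI-generated text before it is posted. The function and field names (tag_generated_content, content_sha256 and so on) are illustrative assumptions for this article, not an established standard or anything proposed in OpenAI's report.

import hashlib
import json
from datetime import datetime, timezone

def tag_generated_content(text: str, model: str, publisher: str) -> dict:
    # Build a simple provenance record for a piece of AI-generated text.
    # Storing a hash of the content alongside details of how and when it
    # was generated would let investigators trace a comment back to its source.
    return {
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "generated_by": model,        # the model that produced the text
        "published_by": publisher,    # the account or page posting it
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,         # the disclosure flag itself
    }

# Example: tagging a short political comment before it is posted.
record = tag_generated_content(
    "Great initiative by the administration!",
    model="example-llm",
    publisher="example-page-123",
)
print(json.dumps(record, indent=2))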
The authorities could also strengthen regulations to deter disinformation, including penalising online platforms that fail to curb perpetrators.
But Teehankee argued that there might be “little political appetite” to do so as some politicians could benefit from disinformation campaigns. Compounding the problem was big tech companies lobbying governments for a loose regulatory approach towards AI, he added.
“Politicians and tech companies have developed a dysfunctional synergy that greatly weakens democracy more than ever. Citizen movements and journalism are the only known counterforce to this.”