
Taking Action Against Coordinated Inauthentic Behavior in Moldova

Takeaways

  • We removed a network targeting Russian-speaking audiences in Moldova for violating our Coordinated Inauthentic Behavior policy.
  • This activity centered around fictitious, Russian-language news brands with a presence on multiple internet services, including ours, Telegram, OK (Odnoklassniki), and TikTok.
  • We removed this campaign before it was able to build authentic audiences on our apps.

As part of our regular updates on notable threat disruption efforts, we’re sharing our findings on coordinated inauthentic behavior (CIB) targeting Moldova that we disrupted early in Q3 of this year. We’re also sharing threat indicators linked to this activity to contribute to the security community’s efforts to detect and counter malicious activity across the internet.

As a reminder, we view CIB as coordinated efforts to manipulate public debate for a strategic goal, in which fake accounts are central to the operation. In each case, people coordinate with one another and use fake accounts to mislead others about who they are and what they are doing. When we investigate and remove these operations, we focus on behavior, not content — no matter who’s behind them, what they post, or whether they’re foreign or domestic.

Here is what we found:

We removed seven Facebook accounts, 23 Pages, one Group, and 20 accounts on Instagram for violating our policy against coordinated inauthentic behavior. This network originated primarily in the Transnistria region of Moldova and targeted Russian-speaking audiences in Moldova. We removed this campaign before it was able to build authentic audiences on our apps.

This operation centered on about a dozen fictitious, Russian-language news brands posing as independent entities with a presence on multiple internet services, including ours, Telegram, OK, and TikTok. These included brands like Tresh Kich, Moldovan Mole, Insider Moldova, and Gagauzia on Air.

The individuals behind this activity used fake accounts – some of which were detected and disabled prior to our investigation – to manage Pages posing as independent news entities, post content, and drive people to this operation’s off-platform channels, primarily on Telegram. Some of these accounts went through significant name changes over time and used profile photos likely created using generative adversarial networks (GANs).
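For readers curious how such photos can be flagged, open-source researchers have observed that popular face generators like StyleGAN place eyes at nearly identical pixel positions across images, because the generator’s training photos were aligned that way. Below is a minimal sketch of that heuristic, not our detection method: it assumes the open-source face_recognition library, and the “canonical” eye coordinates and tolerance are rough illustrative values, not published constants.

```python
# Heuristic check for GAN-style avatar photos based on eye alignment.
# A minimal sketch, not Meta's detection method. Assumes the open-source
# `face_recognition` library (dlib-based). The canonical eye positions and
# tolerance below are rough illustrative values, not published constants.
import face_recognition
import numpy as np

CANONICAL_LEFT_EYE = (0.38, 0.42)   # (x, y) as fractions of image size (assumed)
CANONICAL_RIGHT_EYE = (0.62, 0.42)  # mirrored position for the other eye (assumed)
TOLERANCE = 0.03                    # max normalized distance to count as aligned

def eye_centers(path):
    """Return normalized (x, y) centers of both eyes, or None if no face is found."""
    image = face_recognition.load_image_file(path)
    height, width = image.shape[:2]
    faces = face_recognition.face_landmarks(image)
    if not faces:
        return None
    face = faces[0]

    def center(points):
        xs, ys = zip(*points)
        return (sum(xs) / len(xs) / width, sum(ys) / len(ys) / height)

    return center(face["left_eye"]), center(face["right_eye"])

def looks_gan_aligned(path):
    """True if both eye centers sit near the canonical GAN positions."""
    centers = eye_centers(path)
    if centers is None:
        return False
    (lx, ly), (rx, ry) = centers
    return (np.hypot(lx - CANONICAL_LEFT_EYE[0], ly - CANONICAL_LEFT_EYE[1]) < TOLERANCE
            and np.hypot(rx - CANONICAL_RIGHT_EYE[0], ry - CANONICAL_RIGHT_EYE[1]) < TOLERANCE)
```

A tight cluster of eye positions across many suspect profile photos is a stronger signal than any single image, since real portraits vary in framing while aligned GAN output does not.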

They posted original content, including cartoons, about news and geopolitical events concerning Moldova. This content included criticism of President Sandu, pro-EU politicians, and close ties between Moldova and Romania. They also posted supportive commentary about pro-Russia parties in Moldova, with a small fraction of posts referencing the exiled oligarch Ilan Shor and his party. The operators also posted offers of money and giveaways, including food and concert tickets, for people in Moldova who would follow them on social media or paint graffiti featuring the campaign’s brand names.

This campaign frequently posted summaries of articles from the legitimate news site point[.]md, but with an apparent pro-Russia, anti-EU slant added by the operators. They also amplified the Telegram channel of the host of a satirical political show in Moldova that is critical of pro-European candidates. One of this operation’s branded Telegram channels was promoted by a Page we removed last quarter as part of a Russia-origin CIB network (case #3 in the Q2 2024 report).

We found this network as part of our internal investigation into suspected coordinated inauthentic behavior in the region. Although the people behind this activity attempted to conceal their identities and coordination, our investigation found links to individuals from Russia and Moldova operating from the Transnistria region. These included the people behind a fake engagement service offering fake likes and followers on Facebook, Instagram, YouTube, OK, VKontakte, X, and the petition platform Change.org. We also found some limited links between this CIB activity and a network from the Luhansk region in Ukraine that we removed in December 2020.

Threat indicators

This section details unique threat indicators that we assess to be associated with the malicious network we disrupted. It is not meant to provide a full, historic, cross-internet view of this operation. It’s important to note that, in our assessment, the mere fact that online users shared this operation’s links or engaged with them would be insufficient to attribute their accounts to the campaign without corroborating evidence.

To help the broader research community study and protect people across different internet services, we’ve collated and organized these indicators according to the Online Operations Kill Chain framework, which we use to analyze many types of malicious online operations, identify the earliest opportunities to disrupt them, and share information across investigative teams. The kill chain describes the sequence of steps that threat actors go through to establish a presence across the internet, disguise their operations, engage with potential audiences, and respond to takedowns.

As part of our next threat reporting cycle, we’ll be adding these threat indicators to our public repository on GitHub.
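For researchers who want to work with indicators organized this way, here is a minimal sketch of how they might be represented and grouped by kill chain phase and tactic. The field names and sample values below are illustrative assumptions, not the actual schema of our repository.

```python
# A minimal sketch for representing threat indicators organized by the Online
# Operations Kill Chain. Field names and sample values are illustrative
# assumptions, not the schema of the actual GitHub repository.
from dataclasses import dataclass

@dataclass(frozen=True)
class Indicator:
    phase: str            # kill chain phase, e.g. "Acquiring assets"
    tactic: str           # tactic within that phase, e.g. "Acquiring Telegram channels"
    indicator_type: str   # e.g. "telegram_channel", "brand_name", "page_id"
    value: str            # the indicator itself, defanged where applicable

def by_phase(indicators, phase):
    """Group a flat list of indicators into {tactic: [values]} for one phase."""
    grouped = {}
    for ind in indicators:
        if ind.phase == phase:
            grouped.setdefault(ind.tactic, []).append(ind.value)
    return grouped

# Example with hypothetical values mirroring the phases and tactics listed below:
sample = [
    Indicator("Acquiring assets", "Acquiring Telegram channels",
              "telegram_channel", "t[.]me/example_channel"),
    Indicator("Disguising assets", "Creating fictitious news outlets",
              "brand_name", "Insider Moldova"),
]
print(by_phase(sample, "Acquiring assets"))
# {'Acquiring Telegram channels': ['t[.]me/example_channel']}
```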

Phase: Acquiring assets

Tactic: Acquiring Facebook accounts, Pages, Groups, Instagram accounts

Tactic: Acquiring TikTok accounts

Tactic: Acquiring Telegram channels

Tactic: Acquiring other social media assets

Phase: Disguising assets

Tactic: Creating fictitious news outlets

Tactic: Adopting visual disguise

Phase: Evading detection

Tactic: Camouflaging content

Phase: Targeted engagement

Tactic: Running Ads

Tactic: Engaging with users outside the operation

Tactic: Engaging with specific audience

Tactic: Directing online traffic

Tactic: Posting about individuals or institutions