Today, we’re sharing our findings about two covert influence operations, one from China and one from Russia, that we took down for violating our policy against Coordinated Inauthentic Behavior (CIB). We shared information with our peers at tech companies, security researchers, governments and law enforcement so they too can take appropriate action. At the end of our full report, we’re also including threat indicators to help the security community detect and counter malicious activity elsewhere on the internet. See the full CIB Report for more information.
Here’s what we found:
China
We took down a small network that originated in China and targeted the United States, the Czech Republic and, to a lesser extent, Chinese- and French-speaking audiences around the world. It included four largely separate and short-lived efforts, each focused on a particular audience at different times between fall 2021 and mid-September 2022. In the United States, it targeted people on both sides of the political spectrum; in Czechia, the activity was primarily anti-government, criticizing the state’s support of Ukraine in the war with Russia and that support’s impact on the Czech economy, and using this criticism to caution against antagonizing China. Each cluster of around half a dozen accounts posted content at low volumes during working hours in China rather than when their target audiences would typically be awake. Few people engaged with it, and some of those who did called it out as fake. Our automated systems took down a number of accounts and Facebook Pages for various Community Standards violations, including impersonation and inauthenticity.
This operation ran across multiple internet services, including Facebook, Instagram, Twitter and two Czech petition platforms. This was the first Chinese network we disrupted that focused on US domestic politics ahead of the midterm elections, as well as on Czechia’s foreign policy toward China and Ukraine. Chinese influence operations that we’ve disrupted before typically criticized the United States to international audiences, rather than primarily targeting domestic audiences in the US. A network that we took down in 2020 included a very limited effort to post about US politics, but it primarily focused on the Philippines and Southeast Asia.
Russia
We took down a large network that originated in Russia and primarily targeted Germany, as well as France, Italy, Ukraine and the United Kingdom, with narratives focused on the war in Ukraine. The operation began in May of this year and centered on a sprawling network of over 60 websites carefully impersonating legitimate websites of news organizations in Europe, including Spiegel, The Guardian and Bild. There, the operators posted original articles that criticized Ukraine and Ukrainian refugees, supported Russia and argued that Western sanctions on Russia would backfire. They then promoted these articles, along with original memes and YouTube videos, across many internet services, including Facebook, Instagram, Telegram, Twitter, the petition websites Change.org and Avaaz, and even LiveJournal. Throughout our investigation, as we blocked this operation’s domains, it attempted to set up new websites, suggesting persistence and continuous investment in this activity across the internet. The operation worked primarily in English, French, German, Italian, Spanish, Russian and Ukrainian. On a few occasions, its content was amplified by the Facebook Pages of Russian embassies in Europe and Asia.
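Impersonation of this kind typically relies on domains that closely resemble the real outlets’ addresses. As a rough illustration of how researchers might surface such lookalikes (this is not Meta’s detection method, and the domain lists, helper names and threshold below are illustrative assumptions), a minimal sketch could compare candidate domains against a list of known news domains:

```python
# Minimal sketch: flag domains that closely resemble known news outlets' domains.
# The domain list, threshold and examples are illustrative assumptions only.
from difflib import SequenceMatcher

KNOWN_NEWS_DOMAINS = ["spiegel.de", "theguardian.com", "bild.de"]

def registrable_name(domain: str) -> str:
    """Naively strip the last label (TLD) from a domain, e.g. 'bild.de' -> 'bild'."""
    return domain.lower().rsplit(".", 1)[0]

def looks_like_impersonation(candidate: str, threshold: float = 0.75) -> bool:
    """Flag a domain whose name is suspiciously similar to, but not identical to,
    a known news domain."""
    cand_name = registrable_name(candidate)
    for real in KNOWN_NEWS_DOMAINS:
        if candidate.lower() == real:
            return False  # the genuine site itself
        real_name = registrable_name(real)
        similarity = SequenceMatcher(None, cand_name, real_name).ratio()
        if similarity >= threshold or real_name in cand_name:
            return True
    return False

if __name__ == "__main__":
    for d in ["spiegel.ltd", "bild.example", "unrelated-site.org"]:
        print(d, "->", "suspicious" if looks_like_impersonation(d) else "ok")
```

A production system would also need to handle multi-part suffixes (such as .co.uk), homoglyphs and newly registered domains, but the basic idea of comparing candidate names against a reference list stays the same.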
We began our investigation after reviewing public reporting on a portion of this activity by investigative journalists in Germany. The researchers at the Digital Forensics Research Lab also provided insights into a part of this network, and we’ve shared our findings with them to enable further research into the broader operation.
This is the largest and most complex Russian-origin operation that we’ve disrupted since the beginning of the war in Ukraine. It presented an unusual combination of sophistication and brute force. The spoofed websites and the use of many languages demanded both technical and linguistic investment. The amplification on social media, on the other hand, relied primarily on crude ads and fake accounts. In fact, the majority of accounts, Pages and ads on our platforms were detected and removed by our automated systems before we even began our investigation. Together, these two approaches worked as an attempted smash-and-grab against the information environment, rather than a serious effort to occupy it long-term.
To support further research into this and similar cross-internet activities, we are including a list of domains, petitions and Telegram channels that we have assessed to be connected to the operation. We look forward to further discoveries from the research community.
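For researchers who want to cross-reference those indicators once they are published, a simple check against domains observed in their own data can be a starting point. The sketch below assumes the indicators are available as a plain-text file with one domain per line; the file names and format are hypothetical, and the actual indicators appear in the full CIB report.

```python
# Minimal sketch: compare observed domains against a published indicator list.
# "indicators.txt" and "observed_domains.txt" are hypothetical file names.
def load_domains(path: str) -> set[str]:
    """Read one domain per line, ignoring blank lines and '#' comments."""
    with open(path) as f:
        return {
            line.strip().lower()
            for line in f
            if line.strip() and not line.startswith("#")
        }

def main() -> None:
    indicators = load_domains("indicators.txt")
    observed = load_domains("observed_domains.txt")
    matches = sorted(observed & indicators)
    print(f"{len(matches)} of {len(observed)} observed domains match the indicator list")
    for domain in matches:
        print(" ", domain)

if __name__ == "__main__":
    main()
```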