OpenAI uncovered evidence of a Chinese security operation using artificial intelligence-powered surveillance tools to monitor posts critical of China on social media in Western countries. The campaign, called Peer Review, came to light when someone working on the tool used OpenAI’s technology to debug its code. This marks the first time OpenAI has identified an A.I.-powered surveillance tool of this kind. While there are concerns about A.I. being misused for surveillance and other malicious activities, researchers believe the same technology can also be used to detect and prevent such behavior.
The Chinese surveillance tool is thought to be built on Llama, an A.I. technology developed by Meta and released as open source, giving other developers access to the code. OpenAI also discovered a second Chinese campaign, known as Sponsored Discontent, that used OpenAI’s technology to create English-language posts criticizing Chinese dissidents. Additionally, a campaign based in Cambodia was found to be generating and translating social media comments to facilitate a scam known as “pig butchering,” using A.I.-generated content to lure men into a fraudulent investment scheme.
OpenAI has published a detailed report on the malicious use of A.I., including these campaigns. The company is also facing a copyright infringement lawsuit from The New York Times over the use of news content in A.I. systems, claims that OpenAI and Microsoft have denied. OpenAI researchers continue to monitor for and uncover instances where A.I. technology is used for deceptive purposes, hoping to shed light on these activities and mitigate their impact.