Ben Nimmo, a principal investigator for OpenAI, was quoted by Bloomberg as saying, “This is a pretty troubling glimpse into the way one non-democratic actor tried to use democratic or US-based AI for non-democratic purposes, according to the materials they were generating themselves.”
The Microsoft-backed AI startup noted that the accounts in the network also used other AI models to develop their code, including a version of Meta’s Llama. Meta reportedly said that if its service was involved, it was likely one of many AI models, including Chinese offerings.
What were these banned accounts doing?
The banned accounts reportedly used software called “Qianyue Overseas Public Opinion AI Assistant” to send surveillance reports to Chinese authorities, intelligence agents and Chinese embassy staff. The software is said to be specifically tailored to identify online conversations in Western countries related to human rights demonstrations in China.
According to the threat report published by OpenAI, it is against the company’s policy to use its AI for communications surveillance or unauthorised monitoring of individuals, including “on behalf of governments and authoritarian regimes that seek to suppress personal freedoms and rights”.
Meanwhile, in a statement on the matter to Bloomberg, Meta said that AI models are increasingly available globally, and that restricting access to individual models may not matter much to bad actors given that China is already investing heavily in its AI programme. The company said, “China is already investing more than a trillion dollars to surpass the US technologically, and Chinese tech companies are releasing their own open AI models as fast as companies in the US.”
The US government has previously raised concerns about China using artificial intelligence to spread misinformation, repress its population and undermine the security of the US and its allies.