OpenAI shuts down election influence operation that used ChatGPT
OpenAI has banned a cluster of ChatGPT accounts linked to an Iranian influence operation that was generating content about the U.S. presidential election, according to a blog post published on Friday. The company says the operation produced AI-generated articles and social media posts, though the content does not appear to have reached much of an audience.
This is not the first time OpenAI has banned accounts linked to state-affiliated actors misusing ChatGPT. In May, the company disrupted five covert campaigns that were using ChatGPT to manipulate public opinion.
These episodes are reminiscent of state actors using social media platforms like Facebook and Twitter to try to influence previous election cycles. Now similar groups (or perhaps the same ones) are using generative AI to flood social channels with misinformation. Like the social media companies before it, OpenAI appears to be taking a whack-a-mole approach, banning accounts associated with these efforts as they surface.
OpenAI says its investigation of this cluster of accounts benefited from a Microsoft Threat Intelligence report published last week, which identified the group, which Microsoft calls Storm-2035, as part of a broader campaign that has been attempting to influence U.S. elections since 2020.