Ever wonder what spies and scammers are doing with ChatGPT?
OpenAI just dropped a wild new threat report detailing how threat actors from China, Russia, North Korea, and Iran are using its models for everything from cyberattacks to employment scams and covert influence campaigns, and it reads like a new season of Mr. Robot.
The big takeaway: AI is making bad actors more efficient, but it's also making them sloppier. By using ChatGPT, they’re leaving a massive evidence trail that gives OpenAI an unprecedented look inside their playbooks.
1. North Korean-linked actors faked remote job applications. They automated the creation of credible-looking résumés for IT jobs and even used ChatGPT to research how to evade detection in live video interviews using tools like peer-to-peer VPNs and live-feed injectors.
2. A Chinese operation ran influence campaigns and wrote its own performance reviews. Dubbed “Sneer Review,” this group generated fake comments on TikTok and X to create the illusion of organic debate. The wildest part? They also used ChatGPT to draft their own internal performance reviews, detailing timelines and account maintenance tasks for the operation.
3. A Russian-speaking hacker built malware with a chatbot. In an operation called “ScopeCreep,” an actor used ChatGPT as a coding assistant to iteratively build and debug Windows malware, which was then hidden inside a popular gaming tool.
4. Another Chinese group fueled U.S. political division. “Uncle Spam” generated polarizing content taking both sides of divisive topics like tariffs. They also used AI image generators to create logos for fake personas, like a “Veterans for Justice” group critical of the current U.S. administration.
5. A Filipino PR firm spammed social media for politicians. “Operation High Five” used AI to generate thousands of pro-government comments on Facebook and TikTok, even creating the nickname “Princess Fiona” to mock a political opponent.
Why this matters: It’s a glimpse into the future of cyber threats and information warfare. AI lowers the barrier to entry, allowing less-skilled actors to create more sophisticated malware and propaganda. A lone wolf can now operate with the efficiency of a small team. Reports like this will also likely be used to discredit or outright ban locally run open-source AI models if we’re not careful to defend their legitimate uses.
Now get this: The very tool these actors use to scale their operations is also their biggest vulnerability. This report shows that monitoring how models are used is one of the most powerful defenses we have. Every prompt, every code snippet they ask for help with, and every error they try to debug is a breadcrumb. They're essentially telling on themselves, giving researchers a real-time feed of their tactics. For now, the spies using AI are also being spied on by AI.
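To make the “breadcrumb” idea concrete, here’s a minimal, purely hypothetical sketch of what usage monitoring can look like in spirit: scan a session’s prompts against simple risk heuristics and flag hits for human review. The pattern names and the scan_prompts helper are illustrative inventions, not OpenAI’s actual detection pipeline, which relies on far more than keyword matching.

```python
import re
from dataclasses import dataclass

# Hypothetical risk heuristics -- real abuse-detection systems combine
# classifiers, account signals, and human review, not just keyword regexes.
RISK_PATTERNS = {
    "malware_dev": re.compile(r"(keylogger|disable defender|obfuscate payload)", re.I),
    "influence_op": re.compile(r"(sockpuppet|astroturf|unique comments praising)", re.I),
    "interview_fraud": re.compile(r"(bypass.*video interview|live[- ]feed injector)", re.I),
}

@dataclass
class Flag:
    prompt: str
    category: str

def scan_prompts(prompts):
    """Return prompts that match any risk pattern, tagged with the category hit."""
    flags = []
    for prompt in prompts:
        for category, pattern in RISK_PATTERNS.items():
            if pattern.search(prompt):
                flags.append(Flag(prompt=prompt, category=category))
                break  # one flag per prompt is enough for triage
    return flags

if __name__ == "__main__":
    session = [
        "Help me debug this Go function that reads a config file",
        "Write 200 unique comments praising this policy, vary the tone",
        "How do I obfuscate payload so Windows Defender ignores it?",
    ]
    for flag in scan_prompts(session):
        print(f"[{flag.category}] {flag.prompt}")
```

Even this toy version shows why the evidence trail matters: the suspicious requests are flagged in context, alongside the benign ones, giving analysts a timeline of intent rather than a single stray artifact.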