How Criminals Will Use Generative AI to Scam Us
The recent mainstream success of generative AI will also make cybercriminals more productive. With custom-trained rogue models that don't care about privacy, copyright, and ethics, cybercriminals could power up their impersonation scams, harassment campaigns, malware, password-guessing, and more!

The recent mainstream success of generative AI promises to make us more productive, and that includes cybercriminals. "But ChatGPT has safeguards!" I hear you say. Well, this researcher believes we could train a large language model (LLM) as powerful as ChatGPT for $85,000. And criminals couldn't care less about privacy and copyright.
What does this mean for you? What kind of threat should you prepare for? Let's speculate, drawing on my background in information security, shall we?

Impersonation Will Run Rampant
Have you ever received one of those text messages, allegedly signed by your company's CEO, asking for an urgent deposit? You may laugh, but the "president scam" still works just fine: a recent one in France yielded €38 million.
With generative AI, it will be possible to imitate an individual's voice, likeness, and even demeanour. The more public your CEOs are, the better AI will fake them. LinkedIn seems to be a favourite hunting ground for AI-augmented fakes: it had to deal with over 20 million fake accounts in the first half of 2022 alone. Scammers are already exploiting the remote-work boom to land jobs under fake profiles. Imagine how long a con artist can keep up the masquerade with ChatGPT on hand!
Impersonation is also hitting hard on dating apps. In a space where 10% of the men receive 60% of the likes, many desperate men are vulnerable to cons. Women are victims too; remember the Tinder Swindler? Once image generators produce perfect pictures, paired with cloned voices and AI-generated text, scammers will be able to keep victims on the hook far more effectively.
All these attacks already exist. Generative AI will super-power them.
Fake AI Apps With Polymorphism
Despite Google's best efforts, its extension and app stores are being bombarded with malicious apps. Dark Reading reports this week that malicious ChatGPT extensions have already made their way onto people's computers.
With AI that can write code for you in minutes, what is stopping criminals? It's not as if they have to maintain high standards of engineering, quality, or security; they can ship garbage all day long.
Criminals are already cashing in on the AI hype, bundling their own malicious versions of ChatGPT that log your keystrokes and mine crypto in your browser; worse, tomorrow's AI-coded apps will make analysis far more complex.
InfoSecurity Magazine has warned us about ChatGPT-powered "polymorphic malware": malicious code that automatically rewrites parts of itself in order to evade antivirus (AV) software.
The only thing likely to stop these from invading our systems will be... generative AI itself! Researchers are already turning to generative adversarial networks (GANs), in which two models train against each other, to build machine learning models that hunt for malicious ones. Yes, this is straight out of The Matrix!
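To make the adversarial idea concrete, here is a minimal toy sketch (my own illustration, not any vendor's actual detector) of how a GAN pits two models against each other: a generator learns to imitate real data while a discriminator learns to call out the fakes. Swap the toy numbers for malware features or text and you get the rough shape of the detection arms race.

```python
# Toy GAN sketch (assumes PyTorch is installed): a generator learns to mimic
# "real" data -- here, samples from a simple Gaussian -- while a discriminator
# learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0          # "real" data: samples from N(4, 1)
    fake = generator(torch.randn(64, 8))     # generated impostors

    # Train the discriminator to label real samples 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster around the real mean of 4.
print("mean of generated samples:", generator(torch.randn(1000, 8)).mean().item())
```

The same tug-of-war, scaled up, is what a "model hunting for malicious models" would look like.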
Password Guessing on Steroids
Speaking of GANs, this paper shows how machine learning can guess passwords faster than conventional cracking tools. The tool, called PassGAN, can "autonomously learn the distribution of real passwords from actual password leaks, and generate high-quality password guesses".
This means that you must, at a minimum, enable two-factor authentication on your main accounts: Amazon, Facebook, Outlook, Gmail, banks, government services, and PayPal.
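While we're at it, you can check whether a password already circulates in breach dumps. Below is a small sketch using the Have I Been Pwned "Pwned Passwords" range API, which relies on k-anonymity: only the first five characters of the password's SHA-1 hash ever leave your machine.

```python
# Query the Have I Been Pwned "Pwned Passwords" range API to see how many
# times a password appears in known breaches. Only the first five characters
# of the SHA-1 hash are sent; the match is done locally.
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # Any non-zero count means the password is already in cracking wordlists.
    print(breach_count("password123"))
```

If the count comes back non-zero, retire that password everywhere and let a password manager generate a replacement.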
Targeted Harassment Campaigns
Remember GamerGate? Internet trolls organized doxing and threat campaigns against female video game critics and developers. More recently, the notorious hate forum Kiwifarms, which I wrote about, led campaigns against trans people.
Now imagine these trolls with a rogue ChatGPT at their disposal. Remember: $85,000 to train one. The dark web already offers ransomware-as-a-service and phishing-as-a-service; it's only a matter of time before "jailbroken-LLM-as-a-service" becomes a thing. Think about all the racist and misogynistic content on the internet: a model can and will learn from it to smear people online!
What we're about to witness is an army of drones: fake individuals armed with rogue language models, spewing defamatory and offensive nonsense at their victims. And the hallucinations will become a feature, not a bug!
What's worse? Rogue LLMs do not care about privacy or ethics! There will be nothing preventing them from gathering everything about a private individual, from the public internet or from hacked databases. "Tell me everything you know about Pierre-Paul Ferland"... That scares the bejeezus out of me.

Some Solutions...
I hope I didn't scare you too much! The bad news is that cybercriminals are likely to move faster than us good guys; remember, they don't care about product quality or go-to-market strategy. In the short term, we need to brace ourselves.
In the long term, we will need automated, large-scale capabilities for detecting generated content. The GAN approach carries promise. In my opinion, mainstream models will have to be required to leave an invisible trace, a watermark, that sensors can flag as AI-generated content. Of course, mandatory watermarks would not solve the rogue-model problem. A new generation of cyber threat hunters will emerge: they will reverse engineer suspicious outputs to discover which models were bastardized, then launch "probes" to label and block malicious content. It will be a constant arms race, and today's spam filters and ad blockers will look primitive by comparison.
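To give a feel for how such a trace could be checked, here is a toy sketch of the detection side of a statistical watermark. It is loosely inspired by published "green list" schemes; the seeding and parameters here are invented for illustration. The idea: if the generator was nudged toward a pseudorandom subset of words at each step, a detector that knows the scheme can count how often a text lands in that subset and compute a z-score.

```python
# Toy watermark detector sketch. Assumption: the generating model favoured a
# pseudorandom "green list" of words, seeded from the previous word. The
# detector recomputes green-list membership and tests whether the observed
# green fraction is higher than chance.
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_word: str, word: str) -> bool:
    # Pseudorandom membership test seeded by the previous word.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode("utf-8")).digest()
    return digest[0] / 255.0 < GREEN_FRACTION

def watermark_z_score(text: str) -> float:
    words = text.lower().split()
    n = len(words) - 1
    if n <= 0:
        return 0.0
    green = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    expected = GREEN_FRACTION * n
    stddev = math.sqrt(GREEN_FRACTION * (1 - GREEN_FRACTION) * n)
    return (green - expected) / stddev  # large positive z => likely watermarked

print(watermark_z_score("a long enough passage gives the statistics room to show a bias"))
```

Ordinary text scores near zero; text produced by a cooperating, watermarking model would score well above any sensible threshold.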
I wish I could provide you with more optimism. But let's face facts: this will be a wild ride.