How Criminals Will Use Generative AI to Scam Us

The recent mainstream success of generative AI will also make cybercriminals more productive. With custom-trained rogue models that don't care about privacy, copyright, and ethics, cybercriminals could power up their impersonation scams, harassment campaigns, malware, password-guessing, and more!

The recent mainstream success of generative AI promises to make us more productive, and that includes cybercriminals. "But ChatGPT has safeguards!", I hear you say. Well, this researcher believes we could train a large language model (LLM) as powerful as ChatGPT for $85,000. And criminals couldn't care less about privacy and copyright.

What does this mean for you? What type of threat should you prepare for? Let's speculate, shall we, drawing on my experience in information security?


Impersonation Will Run Rampant

Have you ever received one of those text messages, allegedly signed by your company's CEO, asking for an urgent wire transfer? You may laugh, but the "president scam" still works just fine: a recent one in France netted €38 million.

With generative AI, it will be possible to imitate an individual's voice, likeness, and even demeanour. The more public your executives are, the better AI will fake them. LinkedIn appears to be ground zero for AI-augmented fakes: it had to deal with over 20 million fake accounts in the first half of 2022 alone. Scammers are already exploiting the remote-work phenomenon to land jobs using fake profiles. Imagine how long a con artist can keep up the masquerade with ChatGPT on hand!

Impersonation scams are also striking hard on dating apps. In a space where 10% of the men receive 60% of the likes, plenty of lonely men are vulnerable to cons. Women are victims too: remember the Tinder Swindler? Once image generators can produce flawless photos, backed by cloned voices and AI-generated text, scammers will keep victims on the hook far more effectively.

All these attacks already exist. Generative AI will supercharge them.


Fake AI Apps With Polymorphism

Despite Google's best efforts, its extension and app stores are being bombarded with malicious apps. Dark Reading reported this week that malicious ChatGPT extensions have already made their way onto people's computers.

With AI that can help you code in minutes, what can stop criminals? It's not like they have to maintain high standards of engineering, quality, or security! They can ship garbage all day long.

Criminals are already profiting from the AI hype by bundling their own malicious "ChatGPT" apps that log your keystrokes and mine crypto in your browser. Worse, tomorrow's AI-coded apps will make analysis much more complex.

InfoSecurity Magazine has warned us about ChatGPT-powered "polymorphic malware": malware that automatically rewrites parts of itself in order to evade antivirus (AV) software.
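To see why signature-based detection struggles with this, consider the toy Python sketch below. The "payload" is a harmless print statement standing in for malicious code; the only point is that trivial rewrites preserve behavior while completely changing the file's fingerprint:

```python
# Toy illustration of why hash-based signatures fail against polymorphism.
# The "payload" is a harmless string, not actual malware.
import hashlib
import random

payload = "print('hello world')"

def mutate(code: str) -> str:
    """Append a random no-op comment: behavior unchanged, bytes changed."""
    return code + f"  # {random.randint(0, 1_000_000)}"

signature = hashlib.sha256(payload.encode()).hexdigest()

for _ in range(3):
    variant = mutate(payload)
    match = hashlib.sha256(variant.encode()).hexdigest() == signature
    print(f"variant matches known signature: {match}")  # always False
```

Real polymorphic malware uses far more elaborate transformations (encryption, junk instructions, restructured control flow), but the defender's problem is the same: a static signature only matches one variant.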

The only way to stop these from invading our systems will likely be... generative AI! Experts are already turning to "generative adversarial networks" (GANs), a technique where two models train against each other, to build machine learning models that hunt for malicious ones. Yes, this is straight out of The Matrix!
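To make the adversarial idea concrete, here's a minimal, purely conceptual PyTorch sketch: a "detector" learns to flag generated samples while a "generator" learns to evade it. The 16-number feature vectors are made-up stand-ins for characteristics extracted from a binary; nothing here reflects a production system.

```python
# Conceptual GAN-style training loop: detector vs. generator.
import torch
import torch.nn as nn

DIM = 16  # pretend each sample is a 16-feature summary of a binary

gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, DIM))
det = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(det.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, DIM)     # stand-in for known-benign samples
    fake = gen(torch.randn(64, 8))  # the generator's evasion attempts

    # Detector step: label real samples 1, generated samples 0.
    d_loss = loss_fn(det(real), torch.ones(64, 1)) + \
             loss_fn(det(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the detector output 1 on fakes.
    g_loss = loss_fn(det(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The interesting property is the arms race itself: every improvement in the detector pressures the generator to evade better, and vice versa.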


Password Guessing on Steroids

Speaking of GANs, this paper shows how machine learning can use advanced heuristics to guess passwords faster than conventional tools. The tool, called PassGAN, can "autonomously learn the distribution of real passwords from actual password leaks, and generate high-quality password guesses".
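As a vastly simplified stand-in for that idea (PassGAN itself uses a GAN, not this), here's a toy character-bigram model that "learns" the shape of a made-up five-password leak and samples plausible-looking guesses:

```python
# Toy stand-in for PassGAN's idea: learn the character distribution
# of leaked passwords, then sample plausible guesses. The "leak" is
# invented for illustration.
import random
from collections import defaultdict

leak = ["password1", "qwerty123", "dragon99", "sunshine1", "letmein22"]

# Count character-to-character transitions (^ = start, $ = end).
transitions = defaultdict(list)
for pw in leak:
    chars = ["^"] + list(pw) + ["$"]
    for a, b in zip(chars, chars[1:]):
        transitions[a].append(b)

def sample_guess() -> str:
    """Random walk through the learned transitions."""
    out, cur = [], "^"
    while True:
        cur = random.choice(transitions[cur])
        if cur == "$" or len(out) > 16:
            return "".join(out)
        out.append(cur)

for _ in range(5):
    print(sample_guess())
```

Even this crude model reproduces human habits like trailing digits; a GAN trained on hundreds of millions of real leaked passwords learns far subtler patterns.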

This means that, at a minimum, you must enable two-factor authentication on your main accounts: Amazon, Facebook, Outlook, Gmail, banking, government, and PayPal.
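If you're curious what most authenticator apps do under the hood, here's a minimal sketch of time-based one-time passwords (TOTP) using the pyotp library (pip install pyotp). The secret is generated on the fly here; a real service provisions one per user and stores it server-side:

```python
# Minimal TOTP sketch: the mechanism behind most authenticator apps.
import pyotp

secret = pyotp.random_base32()  # shared once with the user's app
totp = pyotp.TOTP(secret)

code = totp.now()               # what the authenticator app displays
print(totp.verify(code))        # True within the 30-second window
print(totp.verify("000000"))    # almost certainly False
```

Because the code rotates every 30 seconds, a password guessed by a tool like PassGAN is useless on its own.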


Targeted Harassment Campaigns

Remember GamerGate? Internet trolls organized doxing and threat campaigns against female video game critics and developers. More recently, the notorious hate forum Kiwi Farms, which I wrote about, led campaigns against trans people.

Now imagine these trolls armed with a rogue ChatGPT. Remember: $85,000 to train one. The dark web already offers ransomware-as-a-service and phishing-as-a-service; it's only a matter of time before "jailbroken-LLM-as-a-service" becomes a thing. Think of all the racist and misogynistic content on the internet: a model can and will learn from it to smear people online!

What we're about to witness is an army of drones: fake personas armed with rogue language models, spewing defamatory and offensive nonsense at their victims. And hallucinations will become a feature, not a bug!

What's worse? Rogue LLMs don't care about privacy or ethics! Nothing will prevent them from gathering everything about a private individual, whether from the public internet or from hacked databases. "Tell me everything you know about Pierre-Paul Ferland"... That scares the bejeezus out of me.


Some Solutions...

I hope I didn't scare you too much! The bad news is that cybercriminals are likely to move faster than us good guys. Remember, they don't care about product quality or go-to-market strategy! In the short term, we need to brace ourselves.

Long term, we will need automated, large-scale capacity to detect generated content. The GAN approach carries promise. In my opinion, mainstream models will need to leave a mandatory invisible trace that sensors can label as AI content. Of course, mandatory watermarks would not solve the rogue-model problem. A new generation of cyber threat hunters will emerge: they will reverse-engineer suspicious outputs to discover which models were bastardized, then launch "probes" to label and block malicious content. This will be a constant arms race. Today's spam filters and ad blockers will look primitive by comparison.
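As one hedged example of what such a "trace" could look like, here's a toy detector inspired by proposed "green list" LLM watermarking schemes (e.g., Kirchenbauer et al., 2023). A watermarking model would bias generation toward a pseudorandom half of the vocabulary; the detector then checks whether a text contains statistically too many "green" tokens. The tokenization and scoring below are deliberately simplistic:

```python
# Toy "green list" watermark detector, loosely inspired by proposed
# LLM watermarking schemes. Unwatermarked text should score near 0.
import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign ~half of tokens to the green list,
    seeded by the previous token (context-dependent split)."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(tokens: list[str]) -> float:
    """Under the null (human text), each token is green with
    probability 0.5; a large z-score suggests a watermark."""
    green = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (green - 0.5 * n) / math.sqrt(0.25 * n)

text = "the quick brown fox jumps over the lazy dog".split()
print(f"z = {watermark_z_score(text):.2f}")
```

A score near 0 is consistent with human text; a sufficiently long output from a model biased toward the green list would push the score well above typical detection thresholds.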

I wish I could provide you with more optimism. But let's face facts: this will be a wild ride.


Latest In Tech

Privacy and Cybersecurity

  • A USB Drive Explodes in a Journalist's Face. USB flash drives were notoriously used by US and Israeli intelligence to deliver the Stuxnet computer worm to Iranian nuclear facilities. Now, criminals have loaded one with RDX, an explosive. So yeah, never plug in USB drives you've found lying around. Story
  • JPMorgan testing palm and face payments. This is one I don't get. What's more convenient than using your watch? Unlike passwords, biometrics cannot be rotated: once compromised, they're compromised for good. No thanks. Story

Business of Tech

  • The cloud gaming wars have begun. Microsoft is closing in on its acquisition of video game giant Activision and is reportedly planning a gaming app store once EU regulations force Apple to allow third-party app stores on the iPhone. Meanwhile, Netflix plans to expand its game apps to all devices in a push to become a cloud gaming provider. Netflix is a notorious user of Amazon Web Services; I wonder whether it can pull off cloud gaming at a reasonable cost without owning its data centers the way Microsoft does. Story
  • The Internet Archive is fighting for its life. Publisher Hachette claims the Internet Archive's digital book lending violates copyright, and judges agreed. The archive, which also maintains the beloved Wayback Machine, could face financial penalties that would force it to shut down. Of all the things online, why go after a non-profit public library? Hachette looks like a bunch of sleazeballs here. Story

Artificial Intelligence

  • OpenAI releases research on which jobs will be impacted by its technology. Accountants, auditors, news analysts, legal secretaries, administrative assistants, clinical data managers, tax preparers, mathematicians, and web designers are among the occupations that are "100% exposed" to AI disruption. My initial reaction was that the researchers probably had an axe to grind with mathematicians. Story
  • ChatGPT plugins and integrations keep coming. OpenAI announced a series of ChatGPT plugins that allow it to integrate with Instacart and other AI providers. Meanwhile, Canva and Adobe both announced a suite of AI tools to edit images, generate images from text, and generate presentations. I am also curious about "AI native" apps: apps that have been conceived from the ground up as pure AI products.
  • The Verge tested Google's Bard. The whole piece is rather uneventful: Bard looks a lot like Bing and ChatGPT, which may or may not be a good thing. Story

❓ Question of the Week

What's the most sophisticated online scam you've ever been subjected to?


🥳
Thank you for reading!

If you like my content, subscribe to the newsletter with the form below.

Cheers,
PP