AI supercharges cybercrime as scams grow faster, cheaper and harder to spot

Cybercriminals are using artificial intelligence to pull off more elaborate schemes. They’re targeting everything from retirement savings to corporate secrets with methods that keep getting harder to spot.

The same technology that tailors advertisements to online shoppers is now being used by bad actors to gather personal details and launch custom scams at remarkable speed.

Major AI companies like Anthropic, OpenAI, and Google report that criminals are tapping into their platforms to orchestrate complex phishing operations, develop harmful software, and execute various digital attacks. Security specialists warn that criminals are also producing fake audio and video clips of company leaders to trick employees into giving up sensitive information.

Businesses and government offices may soon face swarms of AI-powered systems that can spot weaknesses in computer networks and then plan and carry out attacks with almost no human help.

The technology is changing how criminals operate online. Alice Marwick heads research at Data & Society, an independent technology research organization. She told the Wall Street Journal that the biggest shift involves size and reach. “The real change is scope and scale. Scams are bigger, more targeted, more convincing.”

Brian Singer is a doctoral student at Carnegie Mellon University. He studies how large language models are used in cyberattacks and defenses. His estimate? Half to three-quarters of worldwide spam and phishing messages now come from AI systems.

The attacks themselves have gotten more believable. AI systems trained on company communications can produce thousands of messages that sound natural and match a company’s style. They copy how executives write. They mention recent news found in public records.

The technology also helps overseas scammers hide language mistakes that used to make their attempts obvious. Criminals can impersonate victims through fake videos and copied voices. They use the same fake identity to target several people at once.

John Hultquist is chief analyst at Google Threat Intelligence Group. He describes the main shift as “credibility at scale.”

Bad actors are also getting better at picking targets. They use AI to comb social media for people dealing with major life difficulties: divorce, a death in the family, job loss, and other situations that can make someone more vulnerable to romance scams, investment fraud, or fake job offers.

Dark web markets lower entry barrier

The barrier to entry for cybercrime has dropped. Underground markets now sell or rent AI tools built for criminal work for as little as $90 a month, and the platforms operate much like legitimate software businesses. “Developers sell subscriptions to attack platforms with tiered pricing and customer support,” said Nicolas Christin, who leads Carnegie Mellon’s software and societal systems department.

These services go by names like WormGPT, FraudGPT, and DarkGPT. They can create harmful software and phishing campaigns. Some even include teaching materials on hacking techniques.

Margaret Cunningham is vice president of security and AI strategy at Darktrace, a security company. She says it’s simple. “You don’t need to know how to code, just where to find the tool.”

There’s a recent development called vibe-coding or vibe-hacking. It could let aspiring criminals use AI to build their own malicious programs rather than purchasing them from underground sources. Anthropic disclosed earlier this year that it had stopped several attempts by “criminals with few technical skills” to use its Claude AI to create ransomware.

Criminal operations themselves are changing. Cybercrime has worked like a business marketplace for years now, according to experts. A typical ransomware operation involved several specialized groups: access brokers who broke into company networks and sold entry, intrusion teams who moved through systems stealing data, and ransomware-as-a-service providers who released the malware, handled negotiations, and divided the money.

Speed and automation reshape criminal networks

AI has increased the speed, size, and availability of this system. Work previously done by people with technical knowledge can now run automatically. This lets these groups operate with fewer people, less risk, and higher profits. “Think of it as the next layer of industrialization. AI increases throughput without requiring more skilled labor,” Christin explains.

Can AI launch attacks completely on its own? Not quite yet. Experts compare the situation to the push for fully self-driving vehicles. The first 95% has been achieved. But the final part that would let a car drive anywhere, anytime by itself remains out of reach.

Researchers are testing AI’s hacking abilities in lab environments. A team at Carnegie Mellon, supported by Anthropic, recreated the famous Equifax data breach using AI earlier this year. Singer led the work at Carnegie Mellon’s CyLab Security and Privacy Institute. He calls it “a big leap.”

Criminals exploit AI for harmful purposes. But AI companies say the same tools can help organizations strengthen their digital defenses.

Anthropic and OpenAI are building AI systems that can continuously examine software code to locate weaknesses that criminals might exploit, though humans must still approve any fixes. A recent AI program developed by Stanford researchers performed better than some human testers when searching for security problems in a network.
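To make that workflow concrete, here is a minimal sketch of a human-in-the-loop scanning loop of the kind described above. It is an illustration under stated assumptions, not any vendor’s actual product: the query_model function is a hypothetical stand-in for whatever model API an organization uses, and nothing is changed without an explicit human approval step.

```python
# Minimal sketch of a human-in-the-loop code-scanning workflow (illustrative only).
# Assumptions: query_model() is a hypothetical stand-in for a real LLM API call,
# and every suggested fix requires explicit human approval before anything changes.

from dataclasses import dataclass
from pathlib import Path


@dataclass
class Finding:
    file: str
    report: str  # model's description of the suspected weakness and proposed fix


def query_model(prompt: str) -> str:
    """Hypothetical placeholder for a model API call; replace with a real provider SDK."""
    return "NONE"  # stub response so the skeleton runs end to end


def scan_file(path: Path) -> Finding | None:
    """Ask the model whether one source file contains an exploitable weakness."""
    source = path.read_text(errors="ignore")
    answer = query_model(
        "Review the following code for security weaknesses. "
        "Reply with NONE if nothing is found; otherwise describe the issue "
        "and propose a fix:\n\n" + source
    )
    if answer.strip().upper().startswith("NONE"):
        return None
    return Finding(file=str(path), report=answer)


def scan_repository(root: Path) -> list[Finding]:
    """One sweep over a codebase; in practice this would run continuously."""
    return [f for p in root.rglob("*.py") if (f := scan_file(p)) is not None]


def human_review(findings: list[Finding]) -> None:
    """The approval gate: no fix moves forward without a human saying yes."""
    for finding in findings:
        print(f"\n[{finding.file}]\n{finding.report}")
        if input("Accept this finding and open a fix ticket? [y/N] ").lower() == "y":
            print("Approved: route to the normal patch and review process.")
        else:
            print("Skipped.")


if __name__ == "__main__":
    human_review(scan_repository(Path(".")))
```

The ordering is the point of the pattern: the model proposes findings, and a person decides what moves forward, which mirrors the safeguard the companies describe.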

Even AI won’t stop all breaches. That’s why organizations must focus on building resilient networks that keep working during attacks, Hultquist says.
