AI startups are increasingly struggling with a type of fraud that barely existed a few years ago: automated users signing up in bulk to drain expensive computing resources before companies can stop them.
Stripe Chief Executive Patrick Collison said the problem has become widespread among AI firms using the company's payment infrastructure. Speaking on the TBPN podcast, Collison said roughly one in six new accounts created on some AI platforms now appears to be fraudulent.
The abuse centers on inference tokens, the computing credits required to run AI models. Fraudsters create fake accounts, consume the free allocations offered to new users, then disappear without paying. In some cases, access is reportedly resold through online channels that distribute low-cost AI credentials.
Fortune reported details from Stripe executives on May 7.
Stripe's Collison warns AI companies are facing a new type of fraud
The issue is hitting startups particularly hard because AI products carry real usage costs from the moment someone begins interacting with a model. Unlike traditional software companies, AI firms cannot onboard millions of free users without paying for the underlying compute power needed to process prompts and generate responses.
Emily Sands, Stripe's Head of Data and AI, said some attackers are operating at speeds that make manual fraud reviews ineffective.
"One of the things that's really scary about that is that these attackers can burn inference costs, can rack up massive usage bills that they never intend to pay, and they can do that very, very quickly because they are consuming tokens at machine speed," Sands told Fortune.
According to Sands, abuse involving AI free trials has more than doubled over the past six months.
Researchers tracking AI security vulnerabilities say the attacks often exploit weak credential controls rather than sophisticated hacking techniques. Many AI systems still rely on broad API permissions that allow automated agents to access large portions of backend infrastructure once credentials are obtained.
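The weakness the researchers describe can be illustrated with a minimal sketch: issuing each automated agent its own narrowly scoped key instead of one broad API credential, so a single leaked key can be revoked in isolation. All names here (`ScopedKey`, `issue_key`, the scope strings) are hypothetical and do not reflect any vendor's actual API.

```python
import secrets
from dataclasses import dataclass


@dataclass(frozen=True)
class ScopedKey:
    """Hypothetical per-agent credential with an explicit allow-list of scopes."""
    key: str
    agent_id: str
    scopes: frozenset


def issue_key(agent_id: str, scopes: set) -> ScopedKey:
    # Each agent gets its own key, so one compromised credential can be
    # revoked without rotating credentials for the entire system.
    return ScopedKey(key=secrets.token_urlsafe(32),
                     agent_id=agent_id,
                     scopes=frozenset(scopes))


def authorize(key: ScopedKey, required_scope: str, revoked: set) -> bool:
    # Deny if the key was individually revoked or lacks the required scope.
    return key.key not in revoked and required_scope in key.scopes


billing_agent = issue_key("billing-bot", {"invoices:read"})
revoked_keys = set()

assert authorize(billing_agent, "invoices:read", revoked_keys)
assert not authorize(billing_agent, "models:infer", revoked_keys)  # out of scope

revoked_keys.add(billing_agent.key)
assert not authorize(billing_agent, "invoices:read", revoked_keys)  # isolated revocation
```

The contrast with the pattern Grantex criticizes is the revocation step: here only one agent's key dies, while a shared broad credential would force a system-wide rotation.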
A March 2026 report from security research firm Grantex found that most leading open-source AI agent projects lacked granular identity separation between agents, making it difficult to isolate compromised accounts without rotating entire system credentials.
The broader market for stolen credentials is also expanding. Cybersecurity company SpyCloud said it recovered 18.1 million exposed API keys and machine credentials from criminal marketplaces in 2025, including millions tied to AI-related services.
Some startups are beginning to change how they handle user acquisition
Some startups are already changing how they handle user acquisition because of the rising costs. Industry executives say companies that once relied heavily on free trials are now shortening trial periods, imposing stricter rate limits, or requiring payment details earlier in the signup process.
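The trial-tightening measures the executives describe amount to capping what a new account can consume and how fast. A minimal sketch, with entirely hypothetical numbers and class names, combines a total free-trial token allocation with a per-minute throttle aimed at the "machine speed" consumption Sands describes:

```python
import time


class TrialBudget:
    """Hypothetical per-account cap on free-trial inference tokens,
    combined with a simple per-minute rate limit."""

    def __init__(self, total_tokens: int, tokens_per_minute: int):
        self.remaining = total_tokens
        self.tokens_per_minute = tokens_per_minute
        self.window_start = time.monotonic()
        self.used_this_window = 0

    def allow(self, requested: int) -> bool:
        now = time.monotonic()
        if now - self.window_start >= 60:
            # Start a fresh one-minute window.
            self.window_start = now
            self.used_this_window = 0
        if requested > self.remaining:
            return False  # trial allocation exhausted
        if self.used_this_window + requested > self.tokens_per_minute:
            return False  # machine-speed consumption throttled
        self.remaining -= requested
        self.used_this_window += requested
        return True


trial = TrialBudget(total_tokens=10_000, tokens_per_minute=3_000)
assert trial.allow(2_000)
assert not trial.allow(2_000)  # would exceed the per-minute window
```

The per-minute window matters more than the total cap here: a human user rarely hits it, while a bot draining credits in seconds does immediately.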
Stripe said it has expanded its Radar fraud-detection system to evaluate AI account registrations using indicators such as device fingerprints, IP reputation, and email-domain history. The company said the system blocked more than 3.3 million potentially risky signups across eight AI companies during the past month.
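Signals like these are typically combined into a risk score that gates the signup. The sketch below is a crude weighted heuristic over the three signal types named in the article; the weights, threshold, and function names are illustrative assumptions, not how Radar actually scores accounts.

```python
def signup_risk_score(device_seen_before: bool,
                      ip_reputation: float,
                      email_domain_age_days: int) -> float:
    """Hypothetical weighted score over signup signals.
    ip_reputation ranges from 0.0 (clean) to 1.0 (known abusive)."""
    score = 0.0
    if device_seen_before:
        score += 0.4  # same device fingerprint reused across signups
    score += 0.4 * ip_reputation
    if email_domain_age_days < 30:
        score += 0.2  # freshly registered email domains are riskier
    return score


def should_block(score: float, threshold: float = 0.6) -> bool:
    # Threshold is an illustrative cut-off, tuned in practice against
    # false-positive rates on legitimate signups.
    return score >= threshold


risky = signup_risk_score(device_seen_before=True,
                          ip_reputation=0.9,
                          email_domain_age_days=3)
assert should_block(risky)  # 0.4 + 0.36 + 0.2 = 0.96

clean = signup_risk_score(device_seen_before=False,
                          ip_reputation=0.1,
                          email_domain_age_days=900)
assert not should_block(clean)  # 0.04
```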
The company is also exploring payment systems designed to reduce unpaid usage altogether. Stripe has backed a blockchain-based project called Tempo that would allow AI services to charge customers continuously as compute resources are consumed.
Crypto exchange Coinbase is developing a similar system known as x402, focused on real-time payments between applications and APIs.
Supporters of the approach believe instant settlement could reduce fraud exposure by removing the delay between resource consumption and payment collection.
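The instant-settlement idea can be sketched as serving tokens only against a prepaid balance, debited as compute is consumed, so unpaid exposure never accumulates. This is a simplified illustration of the general concept, not an implementation of Tempo or x402; the pricing and class names are hypothetical.

```python
class PrepaidMeter:
    """Hypothetical continuous-settlement meter: tokens are served only
    while a prepaid balance covers them, so no credit is ever extended."""

    def __init__(self, balance_cents: int, cents_per_1k_tokens: int):
        self.balance_cents = balance_cents
        self.cents_per_1k_tokens = cents_per_1k_tokens

    def consume(self, tokens: int) -> bool:
        # Round the cost up so partial thousands are never served for free.
        cost = (tokens * self.cents_per_1k_tokens + 999) // 1000
        if cost > self.balance_cents:
            return False  # settle-before-serve: request refused, no debt
        self.balance_cents -= cost
        return True


meter = PrepaidMeter(balance_cents=50, cents_per_1k_tokens=20)
assert meter.consume(2_000)      # costs 40 cents, 10 remain
assert not meter.consume(1_000)  # would cost 20, only 10 left
```

Compare this with the free-trial pattern in the article: there, usage runs first and the bill arrives later, which is exactly the delay instant settlement removes.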
Even so, security analysts say the problem reflects a broader tension inside the AI industry: startups are racing to grow as quickly as possible while many of the underlying security and identity systems remain immature.