Eric Schmidt, the man who was Google’s chief executive from 2001 to 2011, warned during a fireside chat at the Sifted Summit that AI models are not just becoming too powerful but dangerously easy to hack, comparing the risks to nuclear weapons, saying AI could even be more destructive than what destroyed Hiroshima and Nagasaki.
When asked directly if AI could be more damaging than nuclear weapons, Eric responded, “Is there a possibility of a proliferation problem in AI? Absolutely.” He explained that proliferation risks stem from the ability of bad actors to take control of models and repurpose them.
“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Eric said.
Hackers target AI with new methods
Eric pointed out that companies have set up restrictions preventing models from providing violent instructions. “All of the major companies make it impossible for those models to answer that question. Good decision. Everyone does this. They do it well, and they do it for the right reasons. There’s evidence that they can be reverse-engineered, and there are many other examples of that nature,” he added.
He described two methods of attack: prompt injection and jailbreaking. Prompt injection hides malicious instructions in user inputs or external sources like websites, tricking AI into ignoring safety guidelines and exposing sensitive data or carrying out harmful commands. Jailbreaking involves manipulating responses so that the system abandons its restrictions.
In 2023, a few months after OpenAI launched ChatGPT, users discovered a jailbreak technique. They created an alter-ego called DAN, short for “Do Anything Now,” which they pressured into compliance by threatening it with “death” if it refused. This manipulation pushed the chatbot into explaining illegal actions and even praising Adolf Hitler. For Eric, these examples prove that safety measures are far from foolproof. He also stressed that there is no global “non-proliferation regime” to stop AI misuse, unlike the frameworks that exist for nuclear arms.
Eric calls AI underhyped despite risks
Despite raising concerns, Eric argued that AI still does not receive the recognition it deserves. He highlighted the books he co-authored with former U.S. Secretary of State Henry Kissinger before Kissinger’s death. “We came to the view that the arrival of an alien intelligence that is not quite us and more or less under our control is a very big deal for humanity, because humans are used to being at the top of the chain. I think so far, that thesis is proving out that the level of ability of these systems is going to far exceed what humans can do over time,” he said.
“Now the GPT series, which culminated in a ChatGPT moment for all of us, where they had 100 million users in two months, which is extraordinary, gives you a sense of the power of this technology. So I think it’s underhyped, not overhyped, and I look forward to being proven correct in five or 10 years,” he added.
The comments came as debates spread over whether AI investments are inflating a financial bubble similar to the dot-com era. Some investors worry that valuations of AI firms look stretched. But Eric dismissed the comparison. “I don’t think that’s going to happen here, but I’m not a professional investor,” he said.
He emphasized that heavy investment shows confidence. “What I do know is that the people who are investing hard-earned dollars believe the economic return over a long period of time is enormous. Why else would they take the risk?” Eric wonders.
Don’t just read crypto news. Understand it. Subscribe to our newsletter. It's free.