
Brussels opens separate AI safety talks with OpenAI and Anthropic


On Monday, the European Commission said it is talking to OpenAI and Anthropic about their AI models, but the two conversations look nothing alike.

Thomas Regnier, a spokesman for the Commission in Brussels, told reporters that OpenAI was the first to offer access to its newest model. The Commission called the move proactive.

Anthropic, on the other hand, has met with officials four or five times, but model access hasn’t come up yet.

β€œWith one (OpenAI), you have a company proactively offering to give access to the company. With the other one (Anthropic), we have good exchanges though we’re not at a stage where we can speculate on potential access or not,” Regnier said during the briefing.

Governments are tightening the circle around frontier AI

The timing isn’t random. Governments everywhere are paying attention to advanced AI systems.

This year, the EU’s AI Act began to take effect, rolling out in stages. The regulation sets risk-based rules for AI providers operating in Europe.

Companies are not required to submit their models to regulators for review before putting them on the market, but it’s clear the Commission wants to know more about what’s coming.

OpenAI and Anthropic are the two biggest American companies working on cutting-edge AI.

Their AI systems keep getting more capable, and regulators are scrambling to figure out what that actually means.

The Commission’s audiovisual service listed the OpenAI and Anthropic discussion under β€œCybersecurity” on the May 11 midday briefing agenda, alongside defense policy and migration.

Washington is moving toward mandatory pre-release checks

While Brussels talks, Washington might mandate.

The White House is considering a plan to review the most powerful AI models before they go public.

That idea gained steam after Anthropic held back broader access to its latest model, Mythos, when internal tests showed it could find exploitable software bugs at scale.

The UK AI Safety Institute reported similar findings in April when it evaluated OpenAI’s GPT-5.5.

The model reverse-engineered a custom virtual machine and solved a complex challenge faster than a human expert, according to findings Cryptopolitan previously reported.

