The battle between artificial intelligence companies has jumped from the tech world straight into American politics. Anthropic announced Thursday it will pour $20 million into races this midterm election season.
The money goes to Public First Action, a newly formed group that wants states to keep their power to write AI rules. That puts Anthropic on a collision course with both OpenAI's political operation and the Trump White House, which wants Washington to take control of AI policy nationwide.
"The companies building AI have a responsibility to help ensure the technology serves the public good, not just their own interests," Anthropic said in Thursday's announcement.
The group is backing candidates who oppose efforts to strip states of their authority over AI technology. One early beneficiary is Marsha Blackburn, the Republican running for Tennessee governor, who fought against federal bills that would have blocked state legislatures from passing their own AI laws.
Public First Action faces steep odds against Leading the Future, the opposing group backed by OpenAI president Greg Brockman and tech investor Marc Andreessen. That operation has collected $125 million since launching in August 2025. Andreessen's venture firm A16Z holds a stake in OpenAI, making the funding fight even more personal between the rival AI developers.
Trump's December executive order escalates the battle
President Trump signed an order in December that directly threatens the state laws Anthropic wants to protect. The directive tells federal agencies to build a national AI framework with minimal rules, then use it to override tougher state regulations.
Trump's order goes further by creating a Justice Department task force specifically designed to challenge state AI laws in court. States with rules Trump considers too strict could lose federal funding. His AI advisor, David Sacks, already singled out Colorado's law as "probably the most excessive" one on the books.
Several states have regulations taking effect or moving through legislatures in 2026. Colorado delayed its AI Act until June 30, 2026, after facing pressure, but the law will still require companies building "high-risk" AI systems to prevent discrimination in their algorithms. California passed seven AI laws in 2025, with its Transparency in Frontier AI Act starting January 1, 2026. Texas banned AI use for certain purposes through its Responsible AI Governance Act.
Cryptopolitan previously reported that Anthropic raised $2 billion at a $60 billion valuation last year, followed by a massive $15 billion investment from Microsoft and Nvidia that pushed its worth to around $350 billion. Those investors now have billions riding on how AI gets regulated.
Deep ideological split drives spending war
The company's blog post Thursday took a veiled shot at OpenAI without naming them directly, warning that "vast resources have flowed to political organizations that oppose" efforts to make AI safer.
If candidates backed by Public First Action win enough seats, they could block federal preemption bills in Congress. That would keep the state-by-state approach alive, at least temporarily.
The rivalry between Anthropic and OpenAI runs much deeper than just funding levels. Founded by siblings Dario and Daniela Amodei after they left OpenAI over safety concerns, Anthropic has built its entire identity around making AI technology less risky. OpenAI and its backers prefer lighter rules that let innovation move faster.
That philosophical gap now plays out in campaign contributions and lobbying. OpenAI asked Trump to block state AI rules in exchange for government access to its models earlier this year. The company argued that fragmented state laws would damage America's AI leadership.
But the odds look tough. Leading the Future's six-to-one funding advantage gives OpenAI's side more money to spend on ads, staff, and ground operations. Trump's executive order also hands federal agencies tools to challenge state laws immediately, without waiting for Congress.
The fight reveals a deeper split in Silicon Valley over how much oversight AI should face, with safety-focused developers on one side and advocates of light-touch regulation on the other.
Voters in states that passed AI laws will essentially get to choose which vision they prefer when they cast ballots this fall. Their decision could determine whether AI development happens under a patchwork of state rules or a uniform federal system with fewer restrictions.