Several major US technology companies, including Amazon, Google, Microsoft and Meta, have thrown their support behind a proposed ten-year moratorium on artificial intelligence (AI) regulation in the United States.
The move has triggered political and expert backlash, exposing divisions both within Congress and across the technology sector.
The initiative, as reported by the Financial Times, is being promoted by lobbying groups representing the interests of leading tech firms. The proposal seeks to prevent US states from introducing any form of AI regulation over the next decade. It has been incorporated into a draft federal budget bill endorsed by President Donald Trump and is set to be debated in the Senate ahead of 4 July.
The campaign is being led by Chip Pickering, a former congressman and now head of INCOMPAS, a trade association whose members include the aforementioned technology giants. According to Pickering, the moratorium is necessary to ensure the United States retains its technological edge over China. He argues that state-level regulatory frameworks risk creating a fragmented environment that could stifle innovation and investment.
Pickering's rationale, however, has drawn criticism from a number of academic and policy voices who view the proposed freeze as a means for tech firms to consolidate power in the rapidly evolving AI sector.
Asad Ramzanali, a policy analyst at Vanderbilt University, suggested that such a measure would enable dominant firms to secure a monopoly in the field of general artificial intelligence. His assessment was echoed by Max Tegmark, president of the Future of Life Institute, who described the proposal as “an attempt to centralise power and wealth”.
The Future of Life Institute has previously called for greater global oversight of advanced AI systems, citing potential risks ranging from misinformation to the deployment of autonomous weapons. Tegmark's warning comes amid growing concerns over the consequences of unregulated AI development, particularly in areas involving generative models and automated decision-making systems.
The proposed moratorium has also exposed fault lines within the Republican Party. Some lawmakers, including Senator Thom Tillis, argue that state-led regulation could hinder national competitiveness and innovation. Others, such as Senators Josh Hawley and Marsha Blackburn, have expressed strong opposition to the freeze, defending the right of individual states to impose their own safeguards on AI usage.
The debate reflects broader uncertainties about the appropriate balance between innovation and regulation. At present, there is no unified federal framework for governing the deployment of artificial intelligence in the US, leaving individual states to adopt their own approaches. Some have moved towards tighter oversight of facial recognition, algorithmic bias, and data privacy, while others have prioritised industry-led standards and voluntary guidelines.
OpenAI's chief executive, Sam Altman, has also weighed in. In testimony to the US Senate, Altman cautioned that imposing mandatory transparency and safety obligations before the deployment of AI models could have "catastrophic" consequences for the industry. His comments have been interpreted as a warning against premature regulation that might impede technological progress.
By contrast, Dario Amodei, CEO of AI research firm Anthropic, has expressed concern about the risks of relying on industry self-regulation. Speaking at the same hearing, Amodei pointed to potential societal dangers if the development and deployment of AI systems proceed without adequate external oversight.
While the lobbying effort continues, the final wording of the Senate's version of the budget bill remains pending. It is not yet clear whether the upper chamber will support the moratorium provision, modify it, or reject it outright. The outcome may set a precedent for future debates over AI governance in the United States, as well as influence international approaches to regulating the technology.
The developments come at a time when governments around the world are grappling with the challenge of crafting legislation to address the opportunities and risks posed by artificial intelligence. In Europe, the AI Act, adopted in 2024, introduced binding obligations on providers and users of high-risk AI systems, placing the EU at the forefront of global regulatory efforts.
In the US, however, no such framework yet exists at the federal level. The proposed ten-year moratorium, if enacted, would significantly delay any nationwide regulation, potentially leaving the field to be shaped primarily by corporate interests.
The debate is expected to intensify as the Senate considers the broader budget package in the coming weeks.