To win the AI competition with China, the United States must enact unified standards and credible enforcement that both enable innovation and build trust, at home and abroad. It must win by doing what no authoritarian regime can: harnessing openness, innovation, accountability, and freedom to create a better system that others want to join. Unlike past technologies, AI is not value-neutral. This race is not only about whose machines are faster. It’s about whose values are embedded in the code.
By Jianli Yang
The removal of the proposed 10-year moratorium on state-level AI regulations from the Senate version of the so-called “Big Beautiful” budget bill does not end the intense national debate the proposal ignited. Nor should it derail the essential federal dual mandate of both promoting and regulating AI development in the United States. Much of the debate centers on the future of American AI innovation and the country’s position in the global AI competition with China. It is important to recognize that concerns about domestic innovation would be far less urgent without that geopolitical rivalry, which is precisely the arena where state-level policies are inherently limited and where federal leadership must take precedence.
Innovation Needs Freedom—But Also Guardrails
Few would dispute one of the core lessons of industrial history: light-touch regulation tends to foster innovation. AI should not be the exception. Given how rapidly China is closing the AI gap with the United States, and how central AI is to Beijing’s broader ambition to displace America as the global geopolitical leader, the United States must ensure that its innovation ecosystem remains dynamic and unimpeded.
While many state regulations tilt heavily toward restriction, federal policy should strike a balance between oversight and encouragement, establishing and enforcing baseline consensus standards while providing the freedom and incentives needed for breakthrough research. At the same time, it should coordinate state-level efforts to support that goal, rather than stifle them.
AI Is Not Value-Neutral: Risks Require Responsible Oversight
The conclusion that light-touch regulation fosters innovation is not absolute, especially in the case of AI and in the context of US–China competition, where historical analogies fall short. Unlike past technologies, AI is not value-neutral. An atomic bomb was value-neutral until the moment it was deployed; many AI models, by contrast, embed biases and assumptions from the moment they are trained.
The rapid evolution of AI has created legitimate fears and uncertainties. These fears are not irrational. AI systems affect human safety, privacy, and the structure of society at every stage—from development to deployment to daily use. Algorithms, often operating as black boxes, can unintentionally cause devastating harm or be designed by malicious actors to do so deliberately. As such, every responsible nation must regulate AI. Left unchecked, AI could destabilize the world order.
The US Cannot Win the AI Race Without Trust
Yes, the US is in a real race with China—a sprint toward technological supremacy. But winning that race requires more than speed. It requires trust, leadership, and values. If the US wants to preserve its role as the chief architect and steward of global order in the AI era, it cannot allow its AI sector to become a lawless frontier. Smart regulation—federal, clear, and principled—can enhance rather than hinder the US position.
A national regulatory framework can set essential safety and privacy standards without smothering innovation. The goal should not be to restrict but to enable: to provide the minimum constraints necessary to build systems the world can trust.
Why is trust so central? Because in AI, success isn’t just about who can build the fastest model—it’s about who can earn global adoption. People will use AI systems they trust. And trust must be earned through transparency, accountability, and good governance. Without it, even the most powerful systems will falter in the marketplace.
This article first appeared in The National Interest on July 2, 2025.