AI Regulation in America: A Work in Progress
Artificial intelligence is advancing rapidly, but U.S. lawmakers are still working out how to regulate it.
Currently, no single, comprehensive federal law governs AI in the United States. Instead, policymakers are beginning to coalesce around a broader framework that could shape future legislation.
A significant recent development came from the White House, which released a National AI Legislative Framework in March. The proposal urges Congress to adopt a unified federal approach, aiming to prevent a fragmented system of state-by-state regulations and establish the foundation for what could become the first major federal AI law.
The framework outlines several key priorities, including protecting children online, addressing intellectual property concerns, preparing the workforce for AI disruption, and managing national security risks. Notably, the proposal favors a “light-touch” regulatory approach, leveraging existing federal agencies rather than creating a new, centralized AI regulator.
Despite the absence of a comprehensive law, AI is not operating in a regulatory vacuum. The Government Accountability Office (GAO) has identified dozens of AI-related policies already in place across federal agencies. A 2025 GAO report found 94 AI-related requirements with government-wide implications, reflecting a complex and fragmented oversight system.
However, significant gaps remain. A 2026 follow-up report by the GAO highlighted deficiencies, particularly regarding how agencies procure and use AI, and how these systems are held accountable. The report concluded that federal oversight must evolve as AI adoption continues to expand across both public and private sectors.
While today’s debate over AI regulation may feel unprecedented, history suggests otherwise. According to the Congressional Research Service, the U.S. has typically not created single agencies to regulate new technologies. Instead, existing laws and regulators are often adapted over time to address emerging challenges.
The internet provides one of the closest modern parallels to AI’s regulatory journey. Governance of the internet evolved away from strict government regulation rather than toward it—a model widely credited with enabling the rapid innovation and growth that defined the digital revolution.
Earlier transformative technologies followed different regulatory paths. In the late 1800s, railroad companies consolidated enormous power over transportation and pricing. According to the Library of Congress, mounting public pressure eventually led to federal intervention through the Interstate Commerce Act of 1887—the first major federal regulation of a private industry.
In telecommunications, the U.S. Department of Justice pursued antitrust action against AT&T, culminating in the company's breakup in 1984. The case is widely regarded as having restored competition to the telecommunications market after decades of monopoly control.
These historical examples highlight a consistent pattern: transformative technologies are typically shaped by private innovation first, with regulatory frameworks following later as the societal impacts become clearer.
That dynamic is now playing out again with artificial intelligence. One of the central concerns today mirrors past industries: market concentration. A relatively small number of companies control key AI infrastructure, including advanced semiconductor chips and cloud computing resources, raising important questions about competition, access, and long-term oversight.
As AI continues to transform industries ranging from healthcare to finance to transportation, the pressure for meaningful regulation will likely intensify. The challenge for policymakers will be balancing innovation and growth against legitimate concerns about safety, privacy, and equity—all while navigating a rapidly evolving technological landscape.
Whether the U.S. ultimately opts for comprehensive AI legislation or continues with its current piecemeal approach remains to be seen. What is clear is that the decisions made in the next few years will significantly shape how artificial intelligence develops and who benefits from its continued advancement.