It was the mid-1990s, and the world was discovering the internet. But this wasn’t the era of endless Instagram scrolling or X feeds—people navigated through GeoCities, conducted searches on HotBot before Google existed, and consulted Ask Jeeves long before AI assistants like Grok emerged on the scene.

During this pivotal time, Congress stood on the verge of enacting landmark legislation that would shape the digital landscape for decades to come. When President Clinton signed the Telecommunications Act of 1996, he celebrated how the measure would create “a superhighway to serve both the private sector and the public interest.”

The 1990s exuded optimism. America had emerged victorious from the Cold War, the economy was booming, and the internet promised to connect the world in unprecedented ways. However, beneath this optimism lay serious concerns about free speech and content regulation. Lawmakers debated whether the Federal Communications Commission should regulate internet content as it did television and radio broadcasts.

Security concerns also loomed large. In the early ’90s, the National Security Agency had implemented the “clipper chip”—a cryptographic backdoor for intercepting phone communications—raising fears about government surveillance extending into the online sphere.

Ultimately, Congress opted to grant the internet significant freedom in the interest of preserving free speech. Telecommunications companies successfully lobbied for legal protection, arguing that “carriers” shouldn’t be held responsible for questionable content posted by “customers.”

“We said that the FCC would not regulate either the content or the character of the internet,” explained then-Representative Chris Cox (R-Calif.) during a 1995 floor debate. “We can’t have the government in the interest of uniformity coming up with standards to regulate this industry.”

Cox partnered with then-Representative (now Senator) Ron Wyden (D-Ore.) to craft a crucial component of the 1996 law. Wyden, while describing the internet as “the shining star of the information age,” acknowledged concerns about inappropriate content. “My wife and I have seen our kids find their way into these chat rooms which make their middle age parents cringe,” he admitted.

Nevertheless, both lawmakers feared that “censorship could really spoil much of its promise.” Their solution was Section 230, a provision that shields internet companies from lawsuits and criminal charges based on user-posted content.

Representative Jay Obernolte (R-Calif.) explained the logic behind Section 230 using an analogy: “If you, as a public service, put up a billboard in a hall and someone puts something on the billboard that says, ‘Congressman Obernolte beats his wife,’ the owner of the billboard is not responsible for the content of that message.”

Today, nearly three decades later, many legislators are calling for fundamental changes to Section 230, concerned about the unintended consequences of this legal immunity.

“Section 230 is absolute liability protection, immunity for the largest social media companies in the world. It’s driving people to suicide. It is ruining our society,” argued Senator Lindsey Graham (R-S.C.), one of the most vocal advocates for reform. “If you buy a bad car, you can sue. Every product you buy, the company has to stand behind it. This is the only area of the law I know where the largest companies in the world have absolute legal immunity.”

Graham has gone so far as to compare social media’s dangers to those of alcohol consumption, while Senator Richard Blumenthal (D-Conn.) criticizes platforms for “putting profits over people” and “destroying the lives of young people by driving toxic content at them through its algorithms.”

This bipartisan frustration stems partly from Congress’s own decisions three decades ago. “As long as these companies believe they’re immune from liability, they’re going to tell all of us to go to hell,” Graham contends.

Senator Josh Hawley (R-Mo.) suggests beginning reform efforts by “allowing victims of child porn and other child abuse material and sexual abuse material to sue these companies.”

The original vision for Section 230 assumed that enhanced opportunities for free expression would allow the internet to flourish, with the market creating a rich online environment without excessive regulation. “Government is going to get out of the way and let parents and individuals control it rather than government doing that job for us,” Cox stated in 1995.

However, concerns about harmful content and the addictive nature of digital platforms have undermined that vision. “You talk to people and they’re scared to death of social media. They’re scared to death of AI,” noted Senator Rick Scott (R-Fla.).

Some lawmakers distinguish between human and algorithmic control of content. Representative Ro Khanna (D-Calif.) observed, “The First Amendment doesn’t protect an algorithm,” suggesting that technology-driven content decisions deserve different treatment than human editorial choices.

Senator Wyden, meanwhile, continues to defend Section 230, arguing that the hands-off approach enabled innovations like Wikipedia and newer platforms like Bluesky. He remains steadfast: “To get rid of (Section) 230, you’re going to have to roll over me.”

As 2026 unfolds, society continues grappling with technology’s implications—from phone addiction to concerns about children’s development. The digital optimism of the mid-1990s has faded, leaving many nostalgic for the days of dial-up modems and the simple joy of hearing “you’ve got mail.”


