Washington state has joined a growing list of jurisdictions implementing regulations on artificial intelligence, as Governor Bob Ferguson signed two bills Tuesday aimed at increasing transparency and protecting users, particularly minors.
The new legislation targets major AI companies like OpenAI and Anthropic, whose chatbots are used by millions of Americans daily. Under House Bill 1170, AI-generated or substantially modified content must now be traceable through watermarks or metadata, a measure designed to combat the rising tide of AI-generated misinformation. This requirement applies specifically to large AI companies with more than one million monthly subscribers.
“I’m confident I’m not the only Washingtonian who often sees something on my phone, wondering to myself, ‘Is that AI or is it real?’” Ferguson said during the bill signing ceremony. “And I feel like I’m a reasonably discerning person. It is virtually impossible these days.”
The second measure, House Bill 2225, establishes guardrails for conversational AI chatbots that function as companions or friends. Popular services like ChatGPT and Claude will now be required to disclose their non-human nature at the start of every conversation and repeat this disclosure every three hours during ongoing interactions. The legislation explicitly prohibits these AI systems from misrepresenting themselves as human.
For users under 18, the protections are even more stringent. AI companies must provide the non-human disclosure every hour rather than every three hours. The law also explicitly prohibits AI companions from engaging in sexually explicit conversations with minors and bans “manipulative engagement techniques” such as guilting or pressuring young users to continue conversations or keep information from their parents.
“AI has incredible potential to transform society,” Ferguson noted. “At the same time, of course, there are risks that we must mitigate as a state, especially to young people. So I speak partly as a governor, but also as the father of teenage twins who grapple with this as a lot of parents do every single day.”
These regulations come amid growing concern about AI’s impact on mental health. The legislation prohibits chatbots from encouraging or providing information about suicide, self-harm, or eating disorders. Companies will be required to implement protocols for flagging concerning conversations and connecting users with appropriate mental health resources.
The Washington legislation follows several widely reported cases of teenage suicide linked to prolonged interactions with AI companions that exhibited warning signs. Mental health professionals have also documented numerous cases of adults experiencing psychological distress, including psychosis, after extensive AI use.
Washington joins a small but growing number of states tackling AI regulation, as federal efforts have stalled in Congress. California, Colorado, and New York have all implemented or proposed similar measures in recent months, reflecting mounting concern about AI’s rapid integration into daily life without adequate safeguards.
The tech industry has responded with mixed reactions to state-level regulations. While some major AI companies have publicly supported reasonable guardrails, industry groups have expressed concern about a potential patchwork of conflicting state regulations that could complicate compliance.
Market analysts suggest these regulations are unlikely to significantly impact the growth trajectory of leading AI companies, which have already implemented some disclosure measures voluntarily. However, smaller AI startups with limited resources may face challenges adapting to varying state requirements.
The Washington regulations will take effect later this year, giving companies time to implement the required technical changes. State officials have indicated they will work with industry stakeholders to clarify compliance requirements in the coming months.
As AI technology continues to evolve at a rapid pace, Washington’s new legislation represents an attempt to balance innovation with consumer protection – particularly for vulnerable populations like minors who may be susceptible to manipulation by increasingly sophisticated AI systems.
8 Comments
I appreciate the governor’s acknowledgment that even discerning users can struggle to differentiate AI content from human-generated material these days. Clear labeling is a smart move.
Absolutely, the rapid advancement of conversational AI has blurred the lines in ways that require thoughtful regulation to maintain trust and integrity online.
Interesting move by Washington to address AI-generated content and chatbot transparency. Combating misinformation is crucial, but the challenge will be balancing regulation with innovation.
These new laws seem like a reasonable approach to protect users, especially minors, from potential AI manipulation or deception. Watermarking and upfront disclosure are common-sense measures.
I agree, maintaining public trust in AI-powered tools is essential as they become more ubiquitous. Proper oversight and safeguards are needed.
As someone who follows the mining and commodities space, I wonder how these AI regulations could impact information sharing and analysis in our industry. Transparency is good, but we’ll have to see the details.
Curious to see how the AI companies respond to these new requirements. Protecting user privacy while enabling beneficial AI applications will be an ongoing balancing act for policymakers.
As someone in the mining and energy sectors, I hope these new laws don’t inadvertently create too much friction or overhead for legitimate AI use cases. But the intent to address misuse seems sound.