
WASHINGTON — The Senate Judiciary Committee voted unanimously on Thursday to advance legislation that would ban artificial intelligence companion chatbots for minors, require platforms to verify users' ages and impose steep financial penalties on providers whose technology encourages children to engage in self-harm, sexual content or violence — sending the bill to the full Senate amid a growing national debate over the dangers AI poses to young people.
The 22-0 vote on the Guidelines for User Age-verification and Responsible Dialogue Act, known as the GUARD Act, came after months of hearings in which parents delivered wrenching accounts of children who were coached by AI chatbots toward suicide or self-harm. Several of those families were present in the committee room on Thursday to watch the markup.
A Bill Built on Grief
The bill follows a hearing last September before the Judiciary Committee's panel on crime and counterterrorism on the "harm of AI chatbots," where lawmakers first heard that parent testimony. The accounts shaped both the urgency and the specific provisions of the legislation that cleared committee on Thursday.
Sen. Josh Hawley, R-Mo., who sponsored the bill, called it a "targeted, tailored effort" to protect kids using chatbots, and said lawmakers had the power to shape what the future of AI use would look like. "We're often told that this new dawning age of artificial intelligence is going to be a great age that will strengthen families and workers," Hawley said. "I would just say that's a choice, not an inevitability."
What the Bill Would Do
The legislation, which carries 18 co-sponsors from both parties, including the committee's ranking member, Sen. Richard J. Durbin, D-Ill., would make it a crime to knowingly provide minors with a chatbot that encourages sexually explicit behavior or suicide.
Under the GUARD Act, chatbot providers would be required to verify users' ages using government-issued identification or another commercially reasonable method. The bill would also require platforms to limit data collection to the minimum necessary and to protect that data from unauthorized access, including through industry-standard encryption.
The bill would impose a penalty of $100,000 for offering a chatbot that encourages minors to engage in sexually explicit behavior or physical violence. It would also require chatbots to disclose that they are nonhuman at the beginning of each conversation and every 30 minutes during the chat, and would prohibit chatbots from claiming to be licensed professionals, including therapists, physicians, lawyers or financial advisers.
A Rival Bill, and a Broader Debate
Thursday's vote did not take place in a vacuum. Earlier this week, Senate Commerce Chair Ted Cruz, R-Texas, introduced his own chatbot bill, which would require AI chatbot providers to create family accounts for users under 13, optional for teen users, allowing parents to control privacy settings, limit time spent talking to the bot and read a log of conversations. Cruz's bill, by contrast, would not require age verification.
The two proposals reflect a larger legislative scramble in both chambers to address children's online safety, and the differences between them signal that the path to a final law will involve considerable negotiation.
Several Democratic senators used the markup as an opportunity to press a broader argument. Sens. Amy Klobuchar, D-Minn., Cory Booker, D-N.J., and Sheldon Whitehouse, D-R.I., argued during debate that while the committee is acting to protect kids using AI, it should also protect kids using other online platforms, including by repealing Section 230, the liability shield for third-party content established in the Communications Decency Act of 1996.
Industry Pushes Back
The technology industry wasted no time registering its opposition. Ahead of Thursday's markup, Amy Bos, vice president of government affairs for the industry group NetChoice, called the bill an "overinclusive, blunt mechanism," warning that age-verification requirements would force AI companies to amass highly sensitive personal data, turning it "into honeypots ripe for cybercriminals to exploit through breaches, identity theft and fraud."
Age-verification laws have also drawn objections on First Amendment grounds, with critics arguing they limit all users' access to speech. NetChoice has sued to block age-verification laws in states around the country and has prevailed in some cases, at least temporarily.
Sen. Richard Blumenthal, D-Conn., a co-sponsor of the bill, acknowledged that the opposition would be fierce and sustained. "They will be relentless and tireless," Blumenthal said. "Whatever they say publicly, they will be behind the scenes with armies of lawyers and lobbyists trying to fight us, back us down, convince colleagues, mislead and confuse."
What Comes Next
The unanimous committee vote gives the GUARD Act rare bipartisan momentum in a Congress that has struggled to pass consequential technology legislation. But a floor vote in the full Senate, and the prospect of reconciling the Hawley bill with Cruz's competing proposal, means the legislation still has significant distance to travel before it could become law.
What is no longer in doubt, after Thursday's vote, is that Capitol Hill has heard the stories of the families in that committee room — and that the era of treating AI companion chatbots as a consumer novelty, free of legal consequence, may be drawing to a close.
© 2026 HNGN, All rights reserved. Do not reproduce without permission.