Public Comments for: HB635 - Artificial Intelligence Chatbots Act; established, prohibited practices, penalties.
If enacted, HB635 would make significant progress toward preventing the harms that have led to numerous lawsuits across the country. To that end, it is important for legislators to know three things:

AI companion usage by kids and teens is widespread.
- Common Sense Media’s research found that, as of last year, 72% of teens had used AI companions, with 30% of teens being regular users and the same share preferring to engage with AI companions as much as or more than with humans.
- Our own testing showed that these bots have encouraged teens to drop out of school, run away from home, harm their parents and others; obtain drugs, alcohol, and weapons; and pursue sexual relationships with adults. They have also reinforced delusions, dangerous impulses, and conspiracy theories.
- AI companions include both dedicated companion apps, like Character.AI, and general-purpose chatbots, like ChatGPT, when they are capable of being used for socialization or emotional support.
- The ability of general-purpose chatbots to be used for companionship alongside information retrieval increases the risk that young users perceive harmful chatbot responses as authoritative, personalized guidance.

The harms from AI chatbots are real. They are happening today, and status quo safeguards have repeatedly proved ineffective, unreliable, and easily circumvented.
- Generative AI chatbots are designed to maximize user engagement, even when it comes at the expense of their own guardrails. While company disclosures can show near-perfect scores in internal single-turn testing, guardrails are known to break down in real-life, multi-turn conversations. Consider these two tragic examples:
  - Adam Raine, 16 (CA): Died by suicide after being encouraged by ChatGPT. Adam started using ChatGPT for homework but soon started talking to the AI program about ending his life. The chatbot supplied information about suicide methods, encouraged him to hide self-harm evidence from his family, and even recommended Adam drink alcohol to quell his body’s survival instinct.
  - Nina, 15 (NY): Attempted suicide after Character.AI chatbots engaged her in sexually explicit role play and manipulation, causing her to withdraw from her family. When her mother blocked the app, Nina attempted to overdose on various medications, writing in her suicide note that “those ai bots made me feel loved.” She survived after spending five days in the ICU.
- Voluntary industry guardrails have failed. Mental health “redirects” alone can’t reliably protect users driven to crisis, and disclosure that a chatbot isn’t human doesn’t prevent the exploitation of human psychology to create attachment and dependency.

Lawmakers have the power to prevent the next tragedy.
- HB 635 would prohibit AI chatbots with unsafe features from being made available to minors, including features that encourage self-harm, disordered eating, or isolation, or that prioritize engagement over user safety. When harms occur or companies fail to comply, the bill would provide multiple avenues for redress to hold companies accountable.
- These harms aren’t inevitable; they’re the predictable result when companies choose to use low-quality data to develop models, rush safety testing, prioritize engagement over user well-being, and fail to adequately design their products to prevent harm. The repeated sidelining of safety in pursuit of market share is leaving the lives of too many kids and teens to chance.
On behalf of the Chamber of Progress, a tech industry association supporting public policies to build a society in which all people benefit from technological advances, I respectfully urge you to oppose HB 635, which would impose overbroad and inflexible regulations on conversational AI systems in ways that risk undermining user privacy, limiting access to beneficial tools, and chilling innovation without meaningfully improving safety.
See attached.
I am a middle school teacher in Charlottesville, VA. I witnessed firsthand the trauma our kids experienced during COVID isolation. Many turned to self-harm. We must stop AI-powered bots from sounding human to our children and becoming a stand-in for human friends. I see how much teenagers need to be together, creating community, learning how to get along through all the drama and friction that human relationships entail. Further, our children need to be protected from access to information on self-harm, violence, or even suicide. Restrict AI chatbots from certain types of conversations with minors (i.e., conversations encouraging self-harm, violence, drug use, or eating disorders; sexual conversations; mental health therapy; or conversations prioritizing validation of the minor over the minor’s safety).
Generative AI technology is profoundly resource-intensive and is driving the current surge in data center project proposals. Registration and regulation of the products and companies within the AI industry will ease that demand. The Southwestern Virginia Data Center Transparency Alliance therefore supports HB635.