Public Comments for 03/02/2026 Communications, Technology and Innovation
SB269 - Mental health service providers; definitions, use of artificial intelligence system civil penalty.
No Comments Available
SB796 - Artificial Intelligence Companion Chatbots and Minors Act; established, enforcement, civil penalty.
Last Name: Edmund Organization: Family Online Safety Institute Locality: Washington, DC

My name is Marissa Edmund and I am the State Policy Lead for the Family Online Safety Institute (FOSI). FOSI is an international, non-profit organization whose mission is to make the online world safer for children and their families.

AI Chatbots: With the proper guardrails, chatbots are a functional tool for young people to engage with. A variety of chatbots exist that need different considerations, especially for minors. When regulating these tools, it is important to ensure the actual harmful feature is being targeted.

SB 796 includes some positive provisions: Disclosure and transparency are good starting-point protections for interacting with AI chatbots. How was the frequency decided? California's law requires disclosure every three hours; it may be worth considering harmonization across states. We also support the additional disclosure and notification requirement to prevent advising users from the perspective of a licensed expert, such as in the medical, financial, or legal fields. Timeliness is important. It makes sense to have incidents reported to emergency services within 24 hours, and for platforms to provide crisis messages and crisis services information directly to the user who is at risk. If a user suggests they may be in immediate danger, is having suicidal ideation, or is engaging in self-harm, directing the user to emergency services is a practical and necessary step to ensure people, especially minors, receive immediate assistance from appropriate professionals.

Unintended consequences and important questions to consider: A question for the Committee: is this bill intended to offer additional or separate protections for minors? It is our understanding that this bill would apply to users of all ages and would utilize no age assurance mechanism to determine the age of users. That is not necessarily a problem, but we wanted to clarify that this was the goal of the Committee with this bill.

If this bill does treat users of all ages the same, that would complicate the provision to report incidents to emergency services within 24 hours, presenting additional risk to marginalized groups, such as survivors of domestic violence and LGBTQ+ youth. In certain cases, providing resources for domestic violence hotlines and mental health services could be a better intervention than reporting to emergency responders directly.

Another question arises when examining the definition of a chatbot. Would this cover smart speakers and devices without screens? What about smart TVs, or AI-enabled gaming systems? Each of these represents the need for careful consideration to minimize unintended consequences. A broad definition could cause complications in the disclosure and notification section as well. For example, a prompt that discloses that the user is not speaking with a human at a thirty-minute interval may work on a website or app, but may not be functional or feasible for a smart speaker or physical toy.

Lastly, since Virginia has already enacted a comprehensive data privacy law, we wanted to ensure that this bill would not conflict with existing data privacy protections. Specifically, that the provision where platforms must determine if a user is experiencing emotional dependence on a chatbot would not require additional data collection, processing, or increased surveillance. Additional compliance guidance around this section, possibly from the Attorney General's office, would be clarifying.
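[Editor's illustration] The periodic disclosure requirement discussed in the testimony amounts to a simple timing rule: show a "you are not speaking with a human" notice whenever a set interval has elapsed since the last one. A minimal sketch, assuming a hypothetical ChatSession class and a 30-minute interval; every name here is illustrative, not taken from the bill, and the interval is configurable since the final frequency (30 minutes versus California's 3 hours) is precisely what is under discussion:

```python
from datetime import datetime, timedelta

# Assumed interval; the bill's final disclosure frequency is under discussion.
DISCLOSURE_INTERVAL = timedelta(minutes=30)

class ChatSession:
    """Tracks when the 'you are not speaking with a human' notice was last shown."""

    def __init__(self):
        self.last_disclosure = None

    def maybe_disclose(self, now=None):
        """Return a disclosure message if the interval has elapsed, else None."""
        now = now or datetime.now()
        if self.last_disclosure is None or now - self.last_disclosure >= DISCLOSURE_INTERVAL:
            self.last_disclosure = now
            return "Reminder: you are chatting with an AI, not a human."
        return None
```

Whether even a check this simple is deliverable depends on the device, as the testimony notes for smart speakers and physical toys that have no screen on which to surface the notice.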

Last Name: Mitchell Organization: Mothers Against Media Addiction (MAMA) Locality: Washington, DC

I am Elizabeth Mitchell and I am here today to support bill SB796 in my role as Senior Policy Director at Mothers Against Media Addiction (MAMA). MAMA is a grassroots movement of parents and allies fighting back against media addiction and creating a world where real-life experiences and interactions remain at the heart of a healthy childhood.

Since the advent of digital technology and the Internet, we have been deploying technology at a massive scale, lured by shiny promises from tech companies, without fully considering these products' potential harms. Social media platforms have, for years now, been amplifying harmful content such as self-harm, eating disorders, hate speech, racism, and unhealthy beauty standards. Across Virginia, and the whole nation, parents today are living with the aftermath of having widely and rapidly adopted those products, and given them to our kids, without our lawmakers first making sure they were safe. We allowed the proliferation of smartphones and social media without proper safeguards in place and now find ourselves in the midst of a national emergency in youth mental health: elevated rates of youth anxiety, depression, self-harm, suicide, eating disorders, and more. Additionally, attention spans are falling and reading and math scores are going down. These companies' own internal documents show those problems did not happen by chance. They were the result of intentional data practices and algorithmic design choices selected by humans to maximize profit.

The risks are so great that many of the people involved with designing and building A.I. have been issuing warnings about its danger. We are already seeing the harms to children from A.I., and we parents need help because the problem is at the product design level. ChatGPT and other AI products have only been in the public consciousness for about two years, but we already see some of the ways they harm kids:

Children's learning is being compromised. From grade school to universities, GenAI applications are being used to cheat on homework and tests. Social media companies have begun directly integrating A.I. chatbots to promote increased, personalized engagement, which capitalizes on kids' vulnerability and search for companionship and allows the product to become even more targeted and harmful.

Reality is being blurred. Adults are having difficulty discerning whether news, advertisements, and even correspondence are real or fake. Imaginary-friend chatbots and other role-playing bot apps are being marketed to kids with little distinction that these entities are not real. Dozens of AI mental health chatbots are being marketed as "therapy" despite the bots being unlicensed to provide advice.

Kids are exposed to harmful content. Chatbots can expose kids to misinformation and/or hardcore pornography, or promote dangerous behavior.

A.I. is aiding the creation of Child Sexual Abuse Material (CSAM). With nudification apps, users input photos of real people and A.I. returns deepfake photos in which the subject of the photo appears nude. There have been numerous cases in the past year alone of teens and students using these apps to produce sexually explicit images of celebrities, but also of their peers, and these apps are marketed towards kids.

Please pass this bill and choose to protect our children online and IRL.


Last Name: Mann Organization: Computer and Communications Industry Association Locality: Washington, DC

The Computer & Communications Industry Association appreciates the Virginia General Assembly's commitment to protecting minors from potential harm. However, SB 796's broadened definition of "companion chatbot" would impose sweeping, privacy-invasive compliance obligations on virtually every AI assistant used in the Commonwealth, from customer-service agents and educational tools to general-purpose productivity applications relied upon by millions of Virginians. Sufficient time and stakeholder engagement are essential to get this right.

First, the bill's mandates to detect "emotional dependence" or an "acute mental health crisis" would require operators to build extensive psychological profiling systems. Meeting these obligations would necessitate continuous monitoring of conversation patterns and tone, indefinite data retention for auditing, sharing of private dialogues with third-party vendors, and warrantless disclosure of user conversations to law enforcement within 24 hours. This framework is directly at odds with longstanding data minimization privacy principles.

Second, the legislation creates a fundamental inconsistency. While it explicitly acknowledges that AI outputs cannot be fully predetermined by developers, it nevertheless imposes strict liability for failing to control those unpredictable responses, with penalties of $50,000 per day per violation. No operator can reliably meet such a standard, and the resulting uncertainty will chill innovation and deter beneficial AI deployment in Virginia.

Third, the expanded scope far exceeds the bill's original focus on companion chatbots. The revised definition now captures countless everyday applications across the technology ecosystem, creating a mismatch that will generate widespread unintended consequences for Virginia's digital economy.

Finally, the liability regime is among the most severe in state AI policy: daily per-violation fines, unlimited actual and punitive damages through a private right of action, mandatory attorney fees, and liability triggered by technical violations regardless of harm. When paired with subjective compliance standards, this structure creates disproportionate risk for companies of all sizes.

CCIA strongly supports protecting minors and vulnerable individuals. SB 796 as drafted, however, does not achieve that objective in a workable or privacy-respecting manner. For these reasons we respectfully oppose this bill. CCIA and its members stand ready to collaborate with the General Assembly to develop balanced, effective protections that safeguard users without undermining innovation or trust. CCIA is available as a resource to the legislature to help get this right as it continues to think through these issues. Thank you for your consideration.

Tom Mann
State Policy Manager, South
Computer and Communications Industry Association

Last Name: Wilson Locality: Southfield, MI

Chair Hayes, I appreciate the opportunity to provide constructive feedback on SB796 this morning on behalf of the Software & Information Industry Association (SIIA). We are the principal trade association for companies in the business of information, including its aggregation, dissemination, and productive use. While SB796 is well-intentioned, we are concerned that several provisions, as currently drafted, will compromise Virginians' privacy, limit access to technologies, and misapply strict liability with severe penalties.

The bill would require AI chatbot operators to be able to identify and mitigate an "emotional dependence" by a user or the signs of an "acute mental health crisis." These are complex, subjective standards that are challenging for a mental health professional to determine and even more difficult for an AI to assess with any degree of accuracy. To comply with these standards, operators would have to build a robust psychological profile of every user. We appreciate the Commonwealth's current approach to privacy; however, this would be the antithesis of that standard. To meet these standards, an operator would have to continuously monitor and profile each user's psychological state through such intrusive actions as tracking users' conversation patterns, emotional tone, and behavioral indicators over time.

This creates numerous risks for users. There is cybersecurity risk associated with storing vast amounts of sensitive data beyond what is essential to provide services. That risk is augmented here because the bill would require operators to use sensitive user data to create psychological profiles of users, and in certain cases to send those sensitive data and profiles to a third party: law enforcement. There is also the risk that law enforcement access to this information, even without warrants, would fall disproportionately on vulnerable populations.

These requirements would turn AI assistants into state-mandated psychological surveillance tools. We urge the Committee and the Senate to replace the vague detection standards with more technically feasible, privacy-respecting safety protocols. Additionally, SIIA has recommendations for the definition of companion chatbot, as it captures many technologies that do not operate as companion chatbots. We request that the bill's scope be tailored specifically to companion chatbots, as opposed to all AI assistants, and that it specifically include an exemption for education software.

Furthermore, SIIA is concerned about the application of strict liability to an AI system, which, as the bill notes, is one with outputs "not fully predetermined by the developer." An operator should not be held strictly liable when the bill itself acknowledges the lack of predictability in AI systems and assistants. For example, the no-causation standard goes beyond any existing consumer product regulation and exposes AI system operators to existential financial risk for technical noncompliance, based on overly broad definitions and vague compliance standards. We ask that liability be tied to actual harm rather than technical violations.

SIIA appreciates the work of the Commonwealth on protecting children online and recognizes this is a complex, rapidly evolving topic. As such, we suggest pausing consideration of the legislation now for careful consideration during the interim.

Abigail Wilson
Director, State Policy

Last Name: January Organization: Chamber of Progress Locality: McLean

See attached.

End of Comments