A new California bill (SB 243) seeks to introduce safety measures for children interacting with AI chatbots. Proposed by Senator Steve Padilla, the bill would require AI companies to periodically remind young users that they are talking to a chatbot, not a human, in an attempt to curb the "addictive, isolating, and influential aspects" of artificial intelligence.
The bill aims to prevent AI developers from deploying "addictive engagement patterns" that could harm minors. Additionally, companies would need to submit annual reports to the State Department of Health Care Services detailing how often their AI detected suicidal ideation in children or raised related topics. AI platforms would also be required to warn users that their chatbots may not be suitable for some minors.
The bill follows growing concern about the psychological impact of AI chatbots on young users. Last year, a parent filed a wrongful death lawsuit against Character.AI. Another lawsuit accused the company of exposing adolescents to harmful content. In response, Character.AI has introduced parental controls and developed a dedicated AI model that filters sensitive topics for young users.
"Our children are not lab rats for tech companies to experiment on at the cost of their mental health," Senator Padilla said. "We need common-sense protections for chatbot users to prevent developers from employing strategies that they know to be addictive and predatory."
As governments step up efforts to regulate social media platforms, AI chatbots may soon face similar scrutiny. If passed, the California bill could set a precedent for future AI regulations aimed at protecting children online.
Thanks for reading.