California wants AI chatbots to remind consumers that they are not people

Even if chatbots successfully pass the Turing test, they will have to give up the game if they operate in California. A new bill proposed by California Senator Steve Padilla would require chatbots that interact with children to offer periodic reminders that they are, in fact, machines and not real people.

The bill, SB 243, was introduced as part of an effort to regulate the safeguards that companies operating chatbots must put in place to protect children. Among the requirements the bill would establish: it would prohibit companies from providing "rewards" to users to increase engagement or usage, require companies to report to the State Department of Health Care Services how often minors show signs of suicidal ideation, and mandate periodic reminders that chatbots are AI-generated and not human.

This last requirement is especially germane at the moment, as children have been shown to be quite vulnerable to these systems. Last year, a 14-year-old tragically took his own life after developing an emotional connection to a chatbot on Character.AI, a chatbot service modeled after various pop culture characters. The child's parents have sued over the death, accusing the platform of being "unreasonably dangerous" and lacking sufficient safety guardrails despite being marketed to children.

Researchers at the University of Cambridge have found that children are more likely than adults to view AI chatbots as trustworthy, even viewing them as quasi-human. This can put children at considerable risk when chatbots respond to their prompts without any protections in place. Consider, for example, how researchers managed to get Snapchat's built-in AI to provide instructions to a hypothetical 13-year-old user on how to lie to her parents in order to meet up with a 30-year-old and lose her virginity.

There are potential benefits to children feeling free to share their feelings with a bot if it allows them to express themselves somewhere they feel safe. But the risk of isolation is real. Small reminders that there is no person on the other end of the conversation can be useful, and intervening in the cycle of addiction that tech platforms are so skilled at trapping children in through repeated dopamine hits is a good starting point. Failing to provide these kinds of interventions as social media took over is part of how we got here in the first place.

But these protections will not address the underlying problems that lead children to seek out the support of chatbots in the first place. There is a severe lack of resources to facilitate real-life relationships for children. Classrooms are overcrowded and underfunded, after-school programs are in decline, "third places" continue to disappear, and there is a shortage of child psychologists to help children process everything they are dealing with. It is a good idea to remind children that chatbots are not real, but it would be better to put them in situations where they do not feel like they need to talk to the bots in the first place.

 