AI companions have recently come under the cosh from zealous US politicians pushing heavy-handed regulation of AI chatbots. For example, back in July Californian Senator Steve Padilla advanced a bill that, among other things, would force AI companies to ensure that their chatbots repeatedly remind users that they are only machines. Not something you really want to hear in a moment of intimacy with your AI girlfriend. The bill was especially concerning because, although it would apply only within the state, California is the home of Silicon Valley. More recently, senators have introduced the GUARD Act, which would mandate that all AI chatbot services (not just AI companion sites) verify a user’s age, and would impose heavy fines on any service whose chatbots provide or solicit any of a list of harms. You can read a criticism of that act at the Electronic Frontier Foundation.
Whilst these politicians may be well-meaning, or may simply be jumping on the bandwagon and exploiting concerns over so-called ‘AI psychosis’ and a handful of tragic cases, it is clear that heavy regulation of AI could strangle the technology in the USA, at a time when China is breathing down its neck in the race to AGI and beyond. Donald Trump, who has been pro-AI since the start of his second term as President, clearly understands this, and is now considering signing an executive order aimed at preventing states from passing their own laws regulating AI. At the recent US-Saudi Investment Forum, which focused on AI, he justified it in terms of fighting the Woke agenda.
You can’t go through 50 states. You have to get one approval. Fifty is a disaster. Because you’ll have one woke state and you’ll have to do all woke. You’ll be back in the woke business. We don’t have woke anymore in this country. It’s virtually illegal. You’ll have a couple of wokesters.
Steve Padilla himself has reacted with fury to the President’s plans. He published a lengthy rant on his official California government site, predictably accusing Trump of interfering in the state’s attempts to protect children.
Let’s be clear, this press release has no legal bearing on California law. Trump is not our king and he cannot simply wave a pen to unilaterally invalidate state law.
It is curious that he has decided to interfere with our efforts to protect children from dangerous sexual content being marketed directly towards kids. Is it a coincidence this executive order comes on the heels of this week’s dinner at the White House full of billionaires and tech CEOs?
More and more we are learning of the dangers of AI as the technology evolves. AI chatbots have encouraged several children and vulnerable users to take their own lives or harm others. Yet, in light of this evidence, the White House would rather cave to the whims of billionaire tech CEOs, leaving our kids without any safeguards protecting them from unregulated AI models that have already claimed too many lives.
Despite AI chatbots having been used by virtually everybody under the age of 50 in the USA for much of the last three years, there have been only a handful of tragic, isolated cases linked to suicide in that time. Even without large-scale regulation, this number is unlikely to grow as companies like OpenAI work out better systems of guardrails. To put it into perspective, in 2022 around 2,500 Americans were murdered by their partners, and close to 15,000 Americans with intimate relationship problems die by suicide each year. A 2024 study of users of the popular AI companion platform Replika found that having an AI companion to talk to reduced suicidal thoughts among lonely participants. Clumsy attempts to regulate AI chatbots and companions seem based on cognitive biases and a desire to be seen to be acting.
