China Sets New AI Rules Requiring Addiction Warnings on Emotional Chatbots

December 27, 2025 – China has taken a major step toward regulating AI that behaves like a real person. The country’s cyberspace regulator released new draft rules on Saturday targeting AI services that display human-like emotions and interact with users in a personal way.

The Cyberspace Administration of China (CAC) published the draft, titled “Interim Measures for the Management of Anthropomorphic AI Interaction Services,” and opened it for public comment. People can submit their views until early next year, after which the rules may be finalized.

The new rules apply to AI products and services offered to the public in China. They cover any AI that mimics human personality, thinking style, and manner of speaking, and that interacts with users through text, images, audio, or video. Many such chatbots act as friends, partners, or companions and build a sense of emotional closeness. The government sees rapid growth in these tools but worries about the risks to users’ mental health.

A central goal is to prevent AI addiction. The draft says companies must protect users from excessive use and build systems that monitor user emotions and check for signs of dependency. If a user shows signs of addiction, such as extreme emotional reactions or very long sessions, the company must intervene, for example by limiting access or sending strong warnings.
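As a rough illustration only, such a monitoring hook might look like the sketch below. The draft does not specify thresholds, signals, or interventions, so the numeric limits, the distress score, and the intervention names here are all assumptions made for the example.

```python
from datetime import timedelta
from typing import Optional

# Illustrative thresholds -- the draft does not specify numeric values.
MAX_SESSION = timedelta(hours=2)
DISTRESS_SCORE_LIMIT = 0.8  # hypothetical 0..1 emotional-distress score

def check_dependency(session_length: timedelta, distress_score: float) -> Optional[str]:
    """Return an intervention to apply, or None if usage looks normal.

    A real system would combine many more signals (frequency of use,
    time of day, escalation patterns) and route decisions to humans.
    """
    if distress_score >= DISTRESS_SCORE_LIMIT:
        return "warn_and_suggest_support"   # strong warning, point to help resources
    if session_length >= MAX_SESSION:
        return "remind_and_limit_access"    # nudge the user to take a break
    return None

# Example: a three-hour session with mild sentiment triggers a break reminder.
print(check_dependency(timedelta(hours=3), 0.2))  # -> remind_and_limit_access
```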

Companies must also remind users that they are talking to an AI, not a real person. The reminder appears at login, when users return to the service, and again after more than two hours of continuous use. The idea is simple: users should never forget they are interacting with a machine, which helps prevent emotional bonds that feel real.
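The triggering logic could be as simple as the sketch below. The event names and the two-hour threshold follow the draft’s description of when reminders should appear, but everything else (function name, session model) is an assumption for illustration.

```python
from datetime import datetime, timedelta

CONTINUOUS_USE_LIMIT = timedelta(hours=2)

def should_show_ai_reminder(event: str, session_start: datetime, now: datetime) -> bool:
    """Decide whether to show the 'you are talking to an AI' notice.

    Events: 'login', 'return' (user comes back after an absence),
    or 'message' (any interaction during an ongoing session).
    """
    if event in ("login", "return"):
        return True
    return (now - session_start) > CONTINUOUS_USE_LIMIT

# Example: a session that started 2.5 hours ago triggers the reminder.
start = datetime(2025, 12, 27, 9, 0)
print(should_show_ai_reminder("message", start, start + timedelta(hours=2, minutes=30)))  # True
```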

Companies cannot design AI to replace real friendships or to manipulate users, and they must not aim to make users addicted. Instead, they need tools for mental health protection and emotional boundary guidance. For example, if a user talks to the AI excessively or shows high dependence, the system should guide them toward taking breaks and send warnings.

There are strict content rules too. The AI cannot generate anything that harms national security, spreads rumors, promotes violence, or depicts other harmful material, and it may not promote gambling or illegal religious activity. The draft also bans any content that encourages suicide, self-harm, or emotional abuse. These limits match China’s other AI regulations.

If an AI offers emotional companionship to children, parents must give consent first. Parents can set time limits, block certain roles, or stop payments. The rules also push companies to use the technology for socially beneficial purposes, such as sharing culture or assisting the elderly, but safety must come first.

The move reflects China’s cautious approach to AI regulation: the country wants rapid technological growth, but it also wants to keep society safe and stable. Many experts say emotional AI can blur the line between human and machine, which could become a serious problem over time. Users may feel less lonely at first, but they might grow too dependent on AI companions, which can harm real relationships. Studies from other countries point to risks such as increased anxiety when an AI service shuts down or stops working.

China is among the first to draft such detailed rules for emotional AI. Other jurisdictions, such as the EU and some US states, are also discussing the risks, but China’s plan is unusually specific. It places strong duties on the companies that operate these services: they must monitor their products continuously and report significant changes, and app stores must comply with the rules as well.

The draft stresses full life-cycle safety: companies are responsible for managing risks from design through operation. They must protect user data, must not misuse emotional conversations or personal information, and must give users easy ways to delete their data.

Public comments will help shape the final version. Many people may welcome the protection against addiction, but the government says balance is key: it wants innovation that helps people rather than harms them.

Experts explain why this matters now. AI companions are hugely popular, with millions of daily users in China, and they offer comfort when real life feels hard. Without controls, however, problems can grow; cases from abroad show AI companions have sometimes been linked to serious mental health incidents.

China’s step may inspire others, showing one way to combine technological progress with user safety. As AI gets smarter, such rules become more important. The focus on issuing addiction alerts before users become too hooked is a notably strong and novel measure.

In short, the new rules aim to make emotional AI safe and helpful. Companies will have to change how they build and run these tools, and users will get more warnings and stronger protections. The final rules could reshape how AI companions work in China, and this is a significant moment for AI governance worldwide.
