AI Rules 2026: Godfather of AI Calls for Tough Action Before It’s Too Late
New York, September 23 – A strong demand was raised this week for strict global rules on Artificial Intelligence. The push came as the United Nations High-Level Week opened in New York. More than 200 experts, leaders, and Nobel Prize winners joined the call, warning that the world must take decisive steps by 2026 to control the use of AI. One of the most powerful voices among them was Geoffrey Hinton, who is often called the “Godfather of AI.”
The concern is not new. AI has been making headlines in recent years for both good and bad reasons. On one side, it is helping people with research, healthcare, communication and business. On the other side, it is being linked to dangerous risks. Experts believe that without clear rules, AI could cause serious harm. These harms may include fake news, social unrest, job losses, and even global health threats.
To push this demand, the group released a joint statement called “Global Call for AI Red Lines.” In this statement, they said that there must be clear boundaries on how AI can be used. They warned that if AI continues to grow without limits, it could harm societies across the world. Nobel Peace Prize winner Maria Ressa presented the statement. She said that the speed of AI development is frightening. According to her, the international community should act quickly. She suggested that world governments need to sign an agreement that defines where AI can be used and where it cannot.
The demand is also linked to growing evidence of misuse. In recent months, many reports have surfaced linking AI to crimes or dangerous activities. Fake news created with AI has already misled millions. Experts also pointed to a worrying rise in suicides among teenagers exposed to harmful AI-driven content. The use of AI to develop deadly weapons is another major concern; if such weapons fall into the wrong hands, the results could be catastrophic.
Geoffrey Hinton’s voice in this demand is especially powerful. Hinton is one of the pioneers of AI research. He spent decades working on neural networks, which today form the backbone of AI tools. In 2023, he left his role at Google so he could speak openly about AI’s dangers. By signing the statement, he is again warning governments and the public that time is running out. He believes that without strict rules, AI could grow beyond human control.
Other Nobel laureates also added their names to the statement. These include Jennifer Doudna, known for her work on CRISPR gene editing, economist Daron Acemoglu, and physicist Giorgio Parisi. Each of them stressed that AI is not just a technical issue but a global human issue. Writer Yuval Noah Harari also gave his support. He said that if humans fail to regulate AI, it may destroy the very foundation of humanity. His words reflect the fear that machines may soon shape human lives in ways we cannot reverse.
The deadline of 2026 was not chosen by chance. Experts believe that within the next two to three years, AI will become even more powerful. By that time, AI could deeply affect economies, politics, and security. That is why the group wants governments to create strong rules before it is too late. They said that the world must not wait for disasters before taking action.
The discussion at the UN is seen as a turning point. Until now, many governments have been slow to act. Some are focusing on the benefits of AI for business and technology. But the joint demand from so many respected names may change the debate. By bringing together leaders, scientists, and Nobel Prize winners, the group has shown that the issue is bigger than just technology. It is about safety, trust, and the future of humanity.
The risks they pointed out are wide-ranging. Unemployment is one of the biggest concerns: as AI takes over jobs in factories, offices, and even creative industries, millions of people could lose work. Human rights are another risk, since AI tools could be misused by governments to monitor and control citizens. Health threats also cannot be ignored. Some fear that AI could even play a role in spreading future pandemics if used in dangerous experiments.
Supporters of AI argue that these fears are overstated. They say that AI is only a tool and can be used for good. But critics respond that without rules, even good tools can turn into threats. The debate is now at the center of global attention. The question remains: will world leaders act fast enough?
The next two years will show if governments listen to these warnings. If they create strong and fair rules by 2026, AI could remain a tool for progress. But if they delay, experts fear that the world may face crises we cannot control. For now, the voices of Geoffrey Hinton, Nobel winners, and global leaders are loud and clear. They want action before it is too late.