
Experts warn of danger to humanity if AI is not stopped

Humanity is in danger from “artificial intelligence experiments,” which must be paused to avoid risk, according to more than 1,000 experts.

Researchers must stop developing new AI (artificial intelligence) systems for the next six months, and if they do not, governments must step in, they warned.

Such is the stark conclusion of a new open letter signed by experts, including academics in the field and technology leaders, such as Elon Musk and Apple co-founder Steve Wozniak.

The letter notes that the positive possibilities of AI are significant. It affirms that humanity “can enjoy a flourishing future” with technology, and that we can now enjoy an “AI summer” in which we adapt to what has already been created.

But if scientists keep training new models, then the world could be in for a much tougher situation. “In recent months, AI labs have entered an unbridled competition to develop and deploy increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict, or control,” the authors of the letter write.

The most advanced AI system available to the public at the moment is GPT-4, developed by OpenAI, which was released earlier this month. The letter calls for AI labs to halt development of any system more powerful than that, for at least the next six months.

“The pause must be public and verifiable, and include all key decision-makers. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the authors write.

During those six months, both AI labs and experts should create new principles for the design of AI systems, they say. Such principles would ensure that any system built within them is “safe beyond a reasonable doubt.”

It would not mean a pause in AI work in general, but rather a suspension of development of new models and capabilities. Instead, research “should be refocused on making today’s powerful, state-of-the-art systems more accurate, secure, interpretable, transparent, robust, aligned, reliable, and loyal.”

The same pause could also give lawmakers time to create new governance systems to scrutinize AI. It would entail appointing authorities who can track the development of such systems and ensure that they do not serve dangerous ends.

For now, the letter includes signatures from the founders and CEOs of Pinterest, Skype, Apple and Tesla. It also includes experts in the field from universities like Berkeley, Princeton, and others.

Some researchers at companies that are working on their own AI systems, such as DeepMind, the UK artificial intelligence company owned by Google’s parent company Alphabet, also signed the letter.

Elon Musk was one of the founders of OpenAI and contributed funding when it launched in late 2015. But in recent months he has apparently become more opposed to the company’s work, arguing that it has become obsessed with creating new systems and is developing them improperly for profit.

Translated by Michelle Padilla

Daily Global Times