Elon Musk and tech leaders call for a pause in the artificial intelligence race, saying it is “out of control”

Senior leaders in the tech industry are calling for artificial intelligence (AI) labs to stop training the most powerful AI systems for at least six months, citing “profound risks to society and humanity.”

Among these leaders is Tesla CEO Elon Musk, along with professors and researchers who signed the letter published by the Future of Life Institute, a nonprofit organization backed by Musk.

The letter comes just two weeks after OpenAI announced GPT-4, an even more powerful version of the technology behind the viral AI chatbot tool ChatGPT. In early tests and a company demo, the technology was shown drafting lawsuits, passing standardized tests, and building a working website from a hand-drawn sketch.

The letter said the pause should apply to AI systems “more powerful than GPT-4.” It also said that independent experts should use the proposed pause to jointly develop and implement a set of shared safety protocols for AI tools that are safe “beyond a reasonable doubt.”

The letter further stated: “Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can reliably understand, predict, or control.”

The wave of attention surrounding ChatGPT late last year helped renew an arms race among tech companies to develop and deploy similar AI tools in their products. OpenAI, Microsoft, and Google are at the forefront of this trend, but IBM, Amazon, Baidu, and Tencent are all working on similar technologies. A long list of startups is also developing AI writing assistants and image generators.

Artificial intelligence specialists are increasingly concerned about the potential of AI tools to give biased answers, spread misinformation, and erode consumer privacy. These tools have also raised questions about how AI could upend professions, enable students to cheat, and change our relationship with technology.

The letter hints at broader discomfort, inside and outside the industry, with the rapid pace of advancement in AI. Some government agencies in China, the European Union, and Singapore have already introduced early versions of AI governance frameworks.

Published by WildWestDominio, a news and information agency.
