Tech executives, researchers and others call for pause on cutting-edge AI model development
More than 1,000 people including prominent tech executives, researchers and authors are urging a pause on the development of advanced artificial intelligence systems.
The group called for the pause today in an open letter released by the nonprofit Future of Life Institute. The signatories include bestselling author Yuval Noah Harari, Apple Inc. co-founder Steve Wozniak, Tesla Inc. Chief Executive Elon Musk, Skype co-founder Jaan Tallinn and other tech executives. Multiple artificial intelligence researchers, including Turing Award winner Yoshua Bengio, have also signed.
The open letter urges AI labs to pause the training of machine learning systems more advanced than GPT-4. Released earlier this month, GPT-4 is the latest large language model from OpenAI LP. It can explain mathematical concepts, find cybersecurity breaches and perform other complex tasks.
The signatories to the letter are calling for the AI training pause to last at least six months. “This pause should be public and verifiable, and include all key actors,” the letter reads. “If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Some AI researchers have called into question the motivations behind the proposal. Others expressed doubt about the feasibility of slowing the pace of machine learning advances. “The call for a 6 month moratorium on making AI progress beyond GPT-4 is a terrible idea,” tweeted longtime AI researcher and executive Andrew Ng. “I’m seeing many new applications in education, healthcare, food, … that’ll help many people. … There is no realistic way to implement a moratorium and stop all teams from scaling up LLMs, unless governments step in. Having governments pause emerging technologies they don’t understand is anti-competitive, sets a terrible precedent, and is awful innovation policy.”
OpenAI has not yet publicly addressed the letter, which wasn’t signed by any of the startup’s executives.
The backers of the AI development pause cited OpenAI’s recent statement that “at some point, it may be important to get independent review before starting to train future systems.” Today’s open letter argues that this “point is now.”
The letter adds that, following the implementation of the development pause, AI labs should work with independent experts to craft machine learning safety protocols. Those protocols, the signatories state, must ensure that AI systems are “safe beyond a reasonable doubt.”
The letter also calls for longer-term changes to the way machine learning research is conducted. Development efforts should be “refocused” on making today’s advanced AI systems more accurate and safe, the signatories state. The letter also calls on researchers to ensure machine learning models are transparent, robust, aligned, trustworthy and loyal.
Another proposal floated by the signatories is that developers should work with policymakers to accelerate the creation of AI governance systems. Those systems, the signatories argue, will have to cover many different areas. The open letter calls on lawmakers to implement new AI oversight and safety policies, as well as to create “well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.”