Coalition of AI leaders sees ‘societal-scale risks’ from the technology’s misuse
A statement issued today and signed by more than 375 computer scientists, academics and business leaders warns of the profound risks of artificial intelligence misuse and says the potential problems posed by the technology should be given the same urgency as pandemics and nuclear war.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” said the statement issued by the Center for AI Safety, a nonprofit organization dedicated to reducing AI risk.
The statement caps a flurry of recent calls by AI researchers and companies developing AI-based technologies to impose some form of government regulation on models to prevent them from being misused or creating unintended negative consequences.
Earlier this month, OpenAI LLC Chief Executive Sam Altman told the Senate Judiciary subcommittee that the U.S. government should consider licensing or registration requirements on AI models and that companies developing them should adhere to an “appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results.”
Microsoft Corp. last week called for a set of regulations to be imposed on AI systems used in critical infrastructure, as well as expanded laws clarifying legal obligations around AI models and labels that make it clear when a computer produces an image or video.
Altman and OpenAI co-founder Ilya Sutskever were among the signatories to today’s statement. Others include Demis Hassabis, chief executive of Google LLC’s DeepMind; Microsoft Chief Technology Officer Kevin Scott; cybersecurity expert Bruce Schneier; and the co-founders of safe AI unicorn Anthropic PBC. Geoffrey Hinton, a Turing Award winner who earlier this month left Google over concerns about AI’s potential for misuse, also signed the statement.
Eight risk areas
The Center for AI Safety cites eight principal risks inherent in AI. These include military weaponization, malicious misinformation and “enfeeblement,” in which “humanity loses the ability to self-govern and becomes completely dependent on machines, similar to the scenario portrayed in the film WALL-E.”
The center also expresses concerns that highly competent systems could give small groups of people too much power, exhibit unexplainable behavior and even intentionally deceive humans.
All this activity comes after the public release last November of OpenAI’s ChatGPT intelligent chatbot. The uncanny humanlike interactive capabilities of the generative model have galvanized attention around AI’s potential to replace human labor and have given birth to a host of competitors.
However, subsequent media reports detailing the tendency of models to sometimes exhibit bizarre and “hallucinatory” behavior have raised concerns about the black-box nature of some AI models and sparked calls for greater transparency and accountability.
The statement drew praise from many quarters. “If large language models continue to advance, they will surpass human ability tenfold,” wrote Nimrod Partush, vice president of AI and data science at cybersecurity analytics firm Cyesec Ltd., in emailed comments. “There is potential for a real existential risk for mankind. I am leaning toward seeing AI as a benevolent force for humanity, but I would still recommend extreme precautions.”
“Philosophically I believe the private sector should take care of AI governance but that’s not going to happen,” said Ken Cox, president of web hosting service Hostirian LLC. “Unfortunately, I believe the government should have some regulations on AI, but they need to be minimal and we need great leaders stepping up and educating through the process.”
However, not everyone is convinced of AI’s doomsday potential, and some questioned the group’s motives in publicizing the statement so aggressively.
“If the top executives of the top AI companies believe AI creates a risk of human extinction, why don’t they stop working on it instead of publishing press releases?” wrote software developer Dare Obasanjo on Bluesky Social.
“Their macho chest-thumping is pure marketing,” wrote media pundit Jeff Jarvis.
“To the extent these risks are real, and many of them are, it’s up to them, the developers and companies that own this technology and will use it, to come together and create industry standards,” tweeted Yaron Brook, chairman of the board at the Ayn Rand Institute. “Stop running to government to solve your issues.”
Business executives have been transfixed by the topic. A recent Gartner Inc. survey of senior executives found that AI is the technology they believe will most significantly affect their industries over the next three years.