UPDATED 21:22 EDT / SEPTEMBER 13 2023

AI

Musk says AI could ‘kill us all’ as tech luminaries attend closed-door session at Congress

Elon Musk and other leaders of the technology industry joined U.S. senators at a closed-door session where they discussed the benefits and challenges of artificial intelligence, as well as the dangers it might pose to society.

At the end of the gathering, Musk (pictured) spoke briefly with reporters and, in his typically candid, or at least attention-getting, style, told them he believes there is an “above zero” chance that AI may “kill us all.”

“I think the chance is low,” he added. “But if there’s some chance, I think we should also consider the fragility of human civilization.”

The technology billionaire, who leads Tesla Inc., SpaceX Corp. and X Corp., formerly known as Twitter, said he supports the creation of a government agency tasked with overseeing the rise of AI and protecting society from the possible dangers it poses. “The consequences of AI going wrong are severe, so we have to be proactive rather than reactive,” Musk said.

The meeting, officially known as the AI Insight Forum, brought together Musk, Meta Platforms Inc. Chief Executive Mark Zuckerberg, Microsoft co-founder Bill Gates, OpenAI LP co-founder and CEO Sam Altman, Nvidia Corp. CEO Jensen Huang and several other tech industry luminaries at the U.S. Capitol. There, they discussed the priorities and risks of AI and shared their thoughts on how it should be regulated.

Sen. Chuck Schumer (D-N.Y.), who convened the session, said Gates, who nowadays focuses mostly on philanthropy, spoke of the potential benefits of AI, such as helping address world hunger. Alongside the technology executives, dozens of union leaders, civil rights advocates, AI researchers and others also attended the meeting.

Schumer told reporters that he asked everybody present if they shared his belief that the government should play a role in regulating AI. According to him, every single person present raised a hand to agree.

“No one backed off in saying we need government involvement,” Schumer told the Wall Street Journal. He said they agreed it was necessary because, even if the companies present agree to create guardrails for AI, there will be other firms that will not do so.

The discussions also covered many of the risks posed by AI. In particular, some attendees expressed concern about open-source AI systems that anyone can download and modify. Using these systems, companies can build on foundation large language models such as the one that powers ChatGPT and customize them to perform many different tasks – without investing the millions of dollars required to train such models from scratch.

Tristan Harris, co-founder and executive director of the Center for Humane Technology, a nonprofit that aims to align technology with humanity’s best interests, reportedly said there is a considerable risk that open-source AI systems will be abused by bad actors. According to Harris, his organization was able to coax Meta’s recently launched open-source Llama 2 model into describing a way to create dangerous biological compounds.

In response, Zuckerberg is said to have claimed it’s possible to find these instructions on the public internet already. The Meta boss agreed that open-source AI systems are not without risks, but insisted that his company was putting a lot of effort into making them as safe as possible.

Zuckerberg’s main argument for the open-source approach is that the technology “democratizes access” to the most advanced AI tools. He pointed out that only a handful of companies in the world are capable of building tools as powerful as Llama 2, and said that making them open levels the playing field and fosters more innovation.

Sen. Maria Cantwell (D-Wash.) also relayed concerns from workers who see AI as a possible threat to their livelihoods. She recalled a discussion she had with Meredith Stiehm, president of the Writers Guild of America West, whose members have gone on strike partly over fears that studios will eventually use AI to eliminate their jobs.

As he departed the meeting, Musk told reporters that he doesn’t believe Congress is ready to regulate AI at the moment, saying it should study the issue before creating any legislation.

Regulation is very much on Schumer’s mind. He told the Journal that the gathering is the first in a series of meetings he aims to convene in order to develop a framework to address the rapid growth of AI. He added that he intends to pass some kind of legislation “within months” but didn’t offer a more specific time frame.

Schumer’s biggest headache is that lawmakers are divided over what the legislation should actually cover. Today’s meeting raised a wide range of concerns, including invasion of privacy, copyright violations, racial bias, economic ties with China and other geopolitical rivals, and the use of AI technology by the military.

Photo: Dan G/Flickr
