OpenAI Group PBC today released a policy framework with suggestions on how to address the risks posed by artificial intelligence.
The 13-page document is titled “Industrial Policy for the Intelligence Age: Ideas to Keep People First.” It’s at least the second such paper OpenAI has published in the past two years.
The document’s release follows new reports that OpenAI insiders have expressed concerns about Chief Executive Sam Altman’s leadership. According to the New Yorker, “several executives connected” to the company have floated the idea of replacing Altman with Fidji Simo. Chief Financial Officer Sarah Friar, in turn, is reportedly not convinced about the strength of OpenAI’s plan to go public.
The company’s newly published framework includes more than two dozen policy suggestions. Some call for broad macroeconomic policy changes, while others discuss narrower topics such as power grid components. Mitigating malicious AI output is another major focus.
One of OpenAI’s fiscal suggestions is that policymakers mitigate the economic impact of AI by “increasing reliance on capital-based revenues — such as higher taxes on capital gains at the top, corporate income, or targeted measures on sustained AI-driven returns.” The paper goes on to suggest a tax tied to automation initiatives. Another section of the paper recommends that the government create an AI-focused public wealth fund.
The paper also includes several narrower economic policy suggestions that each focus on a specific type of market participant.
OpenAI argues that workers should be given a bigger say in how their companies implement AI. Entrepreneurs, meanwhile, would gain access to shared back-office infrastructure and other forms of support if the document's recommendations were implemented. OpenAI's paper also covers established companies, which the ChatGPT developer argues should be given "incentives that encourage firms to retain, retrain, and invest in workers, similar to existing R&D-style credits."
One of the policy suggestions focuses on companies in the energy sector. OpenAI argues that the U.S. government should more actively support the development of power generation capacity. According to the paper, policymakers could go about the task by providing utilities with financial incentives and easing access to grid components such as advanced conductors.
More than a half-dozen of the remaining policy suggestions in the paper focus on AI safety. OpenAI says that the U.S. government should develop “coordinated playbooks to contain dangerous AI systems once they have been released into the world.” Policymakers, adds the paper, should create a way for companies to report AI safety incidents.
“These ideas are ambitious, but intentionally early and exploratory,” OpenAI stated in a blog post that accompanied the paper. “We offer them not as a comprehensive or final set of recommendations, but as a starting point for discussion.”
The company plans to host discussions about its suggestions at a Washington, D.C., hub called the OpenAI Workshop that will open next month. Additionally, OpenAI will issue grants of up to $100,000 to AI policy researchers along with up to $1 million worth of API credits.