UPDATED 11:00 EDT / APRIL 15 2024

AI is becoming more powerful and more expensive, and people are getting nervous

The Stanford Institute for Human-Centered Artificial Intelligence today published the seventh edition of its annual AI Index, one of the most comprehensive reports on the state of the artificial intelligence industry, and its main findings are unsurprising if a bit unsettling.

The report finds that AI development is proceeding at breakneck speed as developers churn out increasingly powerful and sophisticated models every month. Yet despite this accelerated development, the industry has made little progress in addressing fears around AI explainability and the growing nervousness over its impact on people’s lives.

Recognized as one of the most credible and authoritative sources of data and insights on the state of the AI industry, the AI Index is meant to help decision makers take more meaningful action to advance AI responsibly and ethically, with humans in mind.

This year’s edition is the most comprehensive report so far, and it arrives at a crucial juncture, as the influence of AI on society has never been more pronounced. Few people these days are unaware of just how powerful the most advanced AI models have become. That’s particularly true of generative AI, which has captured the public’s imagination with its humanlike reasoning capabilities and its ability to answer questions and create images at a level comparable with human experts.

New chapters in the 2024 AI Index include one with estimates of AI training costs, another offering a more detailed analysis of the responsible AI landscape, and one dedicated to the impact of AI on science and medicine.

AI is becoming more powerful

The major finding of the report was that progress accelerated in 2023, advancing faster than in any prior year, with state-of-the-art systems such as GPT-4, Gemini and Claude 3 displaying impressive multimodal capabilities: they can generate fluent text in multiple languages, process audio and images, and even explain internet memes. This rapid progress has had a profound impact on many people’s lives, with businesses racing to develop AI tools that can enhance productivity and a growing segment of the general public using the technology on a daily basis.

The report found that the number of newly released large language models that power generative AI doubled in 2023 compared to the year prior. Of those LLMs, two-thirds were open-source models such as Meta Platforms Inc.’s Llama 2, but the most capable were closed-source models such as Google LLC’s Gemini Ultra.

According to the report, Gemini Ultra was the first LLM to achieve human-level performance on the key Massive Multitask Language Understanding, or MMLU, benchmark. Not to be outdone, OpenAI’s GPT-4 achieved a 0.96 mean win rate on the Holistic Evaluation of Language Models, or HELM, benchmark, which incorporates MMLU alongside other evaluations.

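For context, HELM’s mean win rate is essentially the average, taken across the benchmark’s scenarios, of how often a model outscores each of the other models being compared. The sketch below illustrates that calculation with hypothetical model names and made-up scores; it is not the report’s or HELM’s actual code.

```python
# Minimal sketch of a HELM-style mean win rate: for each scenario, compute the
# fraction of other models this model outscores, then average across scenarios.
# All model names and scores below are made-up placeholders, not report data.

from typing import Dict

def mean_win_rate(scores: Dict[str, Dict[str, float]], model: str) -> float:
    """Average over scenarios of the fraction of rival models that `model` beats."""
    per_scenario = []
    for by_model in scores.values():
        rivals = [m for m in by_model if m != model]
        wins = sum(by_model[model] > by_model[r] for r in rivals)
        per_scenario.append(wins / len(rivals))
    return sum(per_scenario) / len(per_scenario)

# Fabricated per-scenario accuracy scores for three hypothetical models.
scores = {
    "scenario_mmlu": {"model_a": 0.86, "model_b": 0.80, "model_c": 0.70},
    "scenario_qa":   {"model_a": 0.77, "model_b": 0.79, "model_c": 0.65},
}

print(mean_win_rate(scores, "model_a"))  # 0.75 with these made-up numbers
```

With these fabricated scores, model_a beats both rivals on one scenario and one of two on the other, giving a mean win rate of 0.75.
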
Stanford’s HAI researchers said the most advanced models can now beat humans on many tasks, but not all. For example, they can outperform humans in tasks such as image classification, visual reasoning and English language understanding, yet they still fall short on more complex tasks such as competition-level mathematics and visual commonsense reasoning and planning.

AI is expensive, but investors are happy to foot the bill

The increased performance of AI comes at a cost, though, with the report finding that the development of frontier AI models is becoming far more expensive. Its estimates show that training costs for state-of-the-art models have reached unprecedented levels, with Gemini Ultra said to have consumed an estimated $191 million worth of compute resources and GPT-4 an estimated $78 million worth of compute to train.

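Estimates like these are typically built up from assumptions about how many accelerators were used, for how long and at roughly what rental price. As a rough, hedged illustration only (the function and every number below are hypothetical, not the report’s methodology or inputs), the arithmetic looks something like this:

```python
# Back-of-the-envelope sketch of a training-cost estimate: rented chip-hours
# multiplied by an assumed hourly price per accelerator. Every figure here is
# an illustrative placeholder, not data from the AI Index.

def estimated_training_cost(num_accelerators: int,
                            training_days: float,
                            hourly_rate_per_accelerator: float) -> float:
    """Total cost = accelerators x hours rented x hourly price per accelerator."""
    chip_hours = num_accelerators * training_days * 24
    return chip_hours * hourly_rate_per_accelerator

# Hypothetical run: 20,000 accelerators rented for 100 days at $2.00 per chip-hour.
print(f"${estimated_training_cost(20_000, 100, 2.00):,.0f}")  # $96,000,000
```

Real-world estimates of this kind also have to account for factors such as hardware utilization and failed runs, which is one reason such figures are best read as rough approximations.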

Luckily for AI developers, they’ve been able to find backers willing to shoulder these costs. While global private investment in AI fell for the second year in a row, investment in generative AI specifically surged. The report found that AI companies such as OpenAI, Anthropic PBC, Hugging Face Inc. and AI21 Labs Inc. raised more than $25.2 billion in 2023, more than eight times what the sector raised the year before.

This investment is being fueled by enterprise demand, the report concludes, noting that more Fortune 500 earnings calls mentioned AI than ever before, with numerous studies published last year highlighting how the technology can dramatically improve worker productivity.

AI is accelerating productivity and science

In terms of AI applications, multiple studies last year assessed its impact on labor and found that the technology can assist humans in completing tasks more rapidly while improving the overall quality of their work. Many studies also illustrated AI’s potential to bridge the skill gap between low- and high-skilled workers, though some also cautioned against using the technology without proper oversight, as evidence suggested its unrestricted deployment can result in diminished performance.

Besides its impact on worker productivity, AI is also helping to accelerate scientific progress in many areas, the report found. Last year saw the release of more powerful, science-focused AI applications than ever before; AlphaDev, which makes algorithmic sorting more efficient, and GNoME, which facilitates the process of materials discovery, are just two examples.

AI is making people nervous

Despite all of the cash being splashed on AI and the undeniable progress that’s being made, or perhaps because of that, there are reasons to be fearful of the technology and its potential shortcomings. The report noted that AI still faces significant problems regarding its ability to deal with facts reliably, perform complex reasoning and explain the conclusions it comes up with.

One of the main challenges is the lack of robust and standardized evaluations for LLM responsibility, the report notes. Leading AI developers such as OpenAI, Google and Anthropic all tend to use their own responsible AI benchmarks, which complicates efforts to systematically compare the limitations and risks of their models.

Governments are at least aware of the need to respond to these concerns, with global mentions of AI in legislative proceedings becoming more frequent than ever. In the U.S., regulators passed more AI-related regulations in the last year than ever before, with 25 new rules established, up from just one in 2016.

Whether or not regulation will help to stem concerns from the public remains to be seen, as the study highlighted numerous reasons to be fearful of AI’s growing power.

As more people become aware of AI’s accelerating capabilities, they’re responding with increased nervousness, the report found. For instance, a study by Ipsos Group S.A. found that 66% of respondents are convinced AI will “dramatically” affect their lives within the next three to five years, up from 60% a year earlier. Moreover, a separate study by the Pew Research Center found that 52% of Americans are more concerned than excited about the rise of AI, up from just 38% in 2022.
