UPDATED 15:10 EST / JULY 18 2024

AI

OpenAI, Mistral AI debut new cost-efficient language models

OpenAI and Mistral AI today introduced new language models for powering applications that must balance output quality with cost-efficiency.

OpenAI’s new model, GPT-4o mini, is a scaled-down version of its flagship GPT-4o large language model. Mistral AI, in turn, debuted an algorithm dubbed Mistral NeMo 12B that was developed in collaboration with Nvidia Corp. engineers. It’s designed for many of the same tasks as GPT-4o mini and will be available under an open-source license.

OpenAI’s newest language model 

GPT-4o mini can generate text, craft code and solve math problems much like its more capable namesake. However, the model does so slightly less accurately. GPT-4o mini achieved a score of 82% on MMLU, a benchmark test used to measure the quality of language models’ output, while the original GPT-4o scored 88.7%.

GPT-4o mini trades off those few percentage points’ worth of accuracy for increased cost-efficiency. It will be available through OpenAI’s application programming interface for less than a fifth of the price at which the company offers GPT-4o. As a result, the applications that developers build using the API will be less expensive to operate. 

The model is the first from OpenAI to include a technology called instruction hierarchy. The feature, which the company first detailed in an April research paper, is designed to reduce the risk posed by malicious user input.

Services powered by an OpenAI-developed model such as GPT-4o mini often receive multiple types of prompts. There are prompts entered by an application’s developer that might, for example, instruct GPT-4o mini not to disclose sensitive data to users. Separately, the application’s users send their own requests to the model. 

Instruction hierarchy blocks malicious input by prioritizing the developer’s prompts over the ones entered by users. If a developer instructs an application powered by GPT-4o mini not to disclose sensitive data but a user asks the model to do so regardless, the model will reject the request. It can also prioritize developer-provided instructions in other situations to present applications from carrying out tasks they were not intended to perform. 

GPT-4o mini will become available via OpenAI’s API next week. The company is also rolling out GPT-4o mini to all four tiers of its ChatGPT chatbot service, including the free plan and the top-end Enterprise edition. The latter offering will receive a number of other new features as well through a separate update detailed this morning.

Companies in regulated industries such as the healthcare sector must often keep a record of internal business activities. As part of today’s update, ChatGPT Enterprise is receiving an API that will enable organizations to download a log of their employees’ interactions with the service. OpenAI is also rolling out features that will make it easier to create and delete employee accounts, as well as a tool for blocking integrations with unauthorized third-party applications.

Open-source GPT-4o mini alternative 

Like GPT-4o mini, the open-source Mistral NeMo 12B model that Mistral AI and Nvidia debuted today is designed to be more cost-efficient than frontier LLMs. It features 12 billion parameters, the configuration settings that determine how a neural network processes data. That’s significantly less than the hundreds of millions of parameters in frontier LLMs, which means Mistral NeMo 12B can perform inference using less hardware and thereby cut users’ infrastructure costs.

Nvidia detailed in a blog post that the model is compact enough to run in the memory of a single graphics processing unit. The GeForce RTX 4090, a high-end consumer GPU that the chipmaker debuted in 2022, is among the chips that can accommodate Mistral NeMo 12B. Customers can also run the model on a single RTX 4500, which is designed to power workstations, or Nvidia’s entry-level L40S data center graphics card.

The chipmaker and Mistral AI have packaged Mistral NeMo 12B into a so-called NIM microservice. That’s a preconfigured software container designed to ease the task of deploying the model on Nvidia silicon. According to the chipmaker, NIM can reduce the amount of time required to deploy a neural network from days to a few minutes.

Mistral AI and Nvidia envision developers using Mistral NeMo 12B to power chatbot services. The model also lends itself to several other tasks including code generation, translation and documentation summarization. 

Image: Unsplash

A message from John Furrier, co-founder of SiliconANGLE:

Your vote of support is important to us and it helps us keep the content FREE.

One click below supports our mission to provide free, deep, and relevant content.  

Join our community on YouTube

Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.

“TheCUBE is an important partner to the industry. You guys really are a part of our events and we really appreciate you coming and I know people appreciate the content you create as well” – Andy Jassy

THANK YOU