UPDATED 15:00 EDT / AUGUST 20 2024


OpenAI makes fine-tuning for GPT-4o customization generally available

OpenAI today announced general availability of fine-tuning for GPT-4o, its flagship artificial intelligence large language model, allowing developers to create custom versions of the model for specific use cases.

GPT-4o is OpenAI’s largest and most complex model, capable of responding in real time to text, audio and video. It can reply to voice inputs quickly enough to feel like speaking with another human being, and it can do so while viewing streaming video.

Fine-tuning is an AI technique used to adjust an already pre-trained model to suit a specific task or dataset. Pre-trained models store a great deal of general information, typically drawn from broad datasets covering a wide variety of subjects, which makes them jacks of all trades but masters of none. The goal of fine-tuning is to adapt the model to a specialized use or knowledge domain. It’s similar to training an employee for a particular job, making them better and more efficient in an expert role.

With the new fine-tuning capability, developers can train GPT-4o on custom datasets to get higher performance at lower cost for specific use cases, or to change the tone and behavior of the model.
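In practice, launching a fine-tuning job is a two-step API call: upload a training file, then create a job that points at it. The sketch below uses the official OpenAI Python SDK; the file name and model snapshot identifier are illustrative assumptions, not values from the announcement.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Step 1: upload a JSONL file of chat-formatted training examples.
    training_file = client.files.create(
        file=open("tutor_examples.jsonl", "rb"),  # hypothetical file name
        purpose="fine-tune",
    )

    # Step 2: start a fine-tuning job against a GPT-4o snapshot.
    job = client.fine_tuning.jobs.create(
        training_file=training_file.id,
        model="gpt-4o-2024-08-06",  # example snapshot ID; check OpenAI's docs for current names
    )

    print(job.id, job.status)  # poll the job until it reports "succeeded"

Once the job succeeds, the resulting fine-tuned model gets its own identifier and can be called through the same chat completions endpoint as the base model.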

For example, the model could be fine-tuned to act as a professional tutor for a college-level coding course where students are learning to program in C++ and Ruby. The custom dataset would include specific knowledge of the textbooks students are expected to learn from, the quizzes and tests they will encounter and the kind of behavior the tutor is expected to exhibit.
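Each record in such a dataset is a single JSON line containing one example conversation in the chat-message format the fine-tuning endpoint expects. The record below is a hypothetical illustration of the tutoring scenario, not material from OpenAI:

    {"messages": [{"role": "system", "content": "You are a tutor for a college course on C++ and Ruby. Guide students toward answers rather than giving away solutions."}, {"role": "user", "content": "Why does my C++ loop `while (i = 1)` never stop?"}, {"role": "assistant", "content": "Look at the operator: `=` assigns, and the expression evaluates to the assigned value, so the condition is always true. Which operator compares for equality?"}]}

A few dozen to a few thousand examples like this, covering the course's typical questions and the desired tutoring style, would form the training file uploaded in the sketch above.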

In testing, OpenAI said, fine-tuned GPT-4o models have produced excellent results. Distyl AI Inc., an AI solutions company that partners with Fortune 500 companies, recently placed first on BIRD-SQL, the leading text-to-SQL benchmark.

Using a GPT-4o model fine-tuned on SQL queries and related tasks, OpenAI said Distyl achieved an execution accuracy of 71.83% on the leaderboard. The model also excelled at query reformulation, intent classification, chain-of-thought reasoning and self-correction.

Fine-tuning for GPT-4o and GPT-4o mini, the cost-efficient small model, is available for developers on all paid usage tiers.

OpenAI said GPT-4o fine-tuning training will cost $25 per million tokens, and once deployed, the model will cost $3.75 per million input tokens and $15 per million output tokens. To make it easier to try the service, developers will receive 1 million free training tokens per day for GPT-4o and 2 million free training tokens per day for GPT-4o mini through Sept. 23, the company added.
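To put those prices in perspective, here is a back-of-the-envelope calculation. The token volumes are hypothetical, and the training figure assumes OpenAI's usual billing of file tokens multiplied by the number of training epochs:

    # Rates from the announcement, in dollars per million tokens
    TRAIN_RATE, INPUT_RATE, OUTPUT_RATE = 25.0, 3.75, 15.0

    # Hypothetical workload for illustration
    training_tokens = 500_000    # tokens in the training file
    epochs = 3                   # passes over the data
    monthly_input = 10_000_000   # prompt tokens served per month
    monthly_output = 2_000_000   # completion tokens served per month

    training_cost = training_tokens * epochs * TRAIN_RATE / 1e6
    serving_cost = (monthly_input * INPUT_RATE + monthly_output * OUTPUT_RATE) / 1e6

    print(f"one-time training: ${training_cost:.2f}")   # $37.50
    print(f"monthly inference: ${serving_cost:.2f}")    # $67.50

Notably, the 1.5 million billed training tokens in this example would fit within two days of the promotional free daily allowance for GPT-4o.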

Image: OpenAI
