UPDATED 17:06 EST / DECEMBER 04 2024

AI

OpenAI to host 12-day product announcement series with new reasoning model, Sora expected

OpenAI will host 12 livestreams over 12 days to announce new artificial intelligence products.

The ChatGPT developer previewed the product launch series today. The company didn’t specify what it plans to announce, saying only that customers can expect “a bunch of new things, big and small.” The first livestream is scheduled for Thursday.

Sources told The Verge that one of the products in the pipeline is a “new reasoning model.” This suggests OpenAI might release a successor to o1, its flagship reasoning-optimized large language model. When it debuted the LLM in September, the company disclosed plans to release multiple new versions with improved capabilities down the road.

The current iteration of o1 can complete challenging tasks that comprise dozens of steps, such as deciphering scrambled text. In one internal evaluation, OpenAI tested the model by having it tackle GPQA Diamond, a collection of complicated graduate-level science questions. The LLM performed better than a group of experts with doctorates.

OpenAI developed o1 using a new implementation of reinforcement learning, a training technique in which a model improves by receiving feedback on the quality of its outputs. The company says that a model trained with this technique “consistently improves” as the amount of compute capacity used to train it increases. The reasoning model OpenAI will reportedly debut could be a version of o1 that was trained using more compute capacity to boost the quality of its output.

Alternatively, the company may launch a version of the LLM optimized for cost efficiency. 

OpenAI currently offers o1 in two versions: the flagship o1-preview edition and o1-mini, which trades off some output quality for lower pricing. The company could launch a midrange version situated between o1-mini and o1-preview on the price-performance scale. Rival Anthropic PBC has taken a similar approach with its Claude lineup of LLMs.

According to The Verge, OpenAI will also launch its long-anticipated Sora model. The model was first previewed in February. It can generate videos up to a minute long based on natural language prompts or an image, as well as extend an existing clip both forward and backward in time.

Sora’s capabilities were developed through a new approach to AI training that OpenAI detailed earlier this year. 

The company trained the model on a dataset that comprised videos and descriptive captions. OpenAI generated those captions automatically with the help of an AI model it created specifically for the task. According to the company, this arrangement improved the quality of Sora’s output. 

Besides adding AI-generated captions to the videos in Sora’s training dataset, OpenAI also compressed those videos into a latent space. This is a mathematical structure that stores key information from a file and discards the rest. The resulting reduction in the file’s storage footprint lowers training costs and improves AI models’ performance.
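OpenAI hasn’t published the details of Sora’s compression network, but the general idea of a latent space can be illustrated with a simple linear example. The sketch below (using PCA, a standard dimensionality-reduction method, purely as a stand-in for Sora’s proprietary encoder) compresses 64-dimensional “frame” vectors that have only a few underlying degrees of freedom into a 4-dimensional latent code, then reconstructs them with little loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "video frames": 100 samples of 64-dimensional data that actually
# vary along only 4 underlying directions, plus a little noise.
latent_true = rng.normal(size=(100, 4))
mixing = rng.normal(size=(4, 64))
frames = latent_true @ mixing + 0.01 * rng.normal(size=(100, 64))

# Fit a linear "encoder" via PCA: keep the top 4 principal components.
mean = frames.mean(axis=0)
centered = frames - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
components = vt[:4]                          # 4 x 64 projection matrix

latents = centered @ components.T            # encode: 64 dims -> 4 dims
reconstructed = latents @ components + mean  # decode back to 64 dims

# The latent code is 16x smaller, yet nearly all information survives.
error = np.mean((frames - reconstructed) ** 2)
print(latents.shape, error)
```

The same principle, scaled up with learned nonlinear encoders, is what makes training on compressed video representations far cheaper than training on raw pixels.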

Today’s report that Sora will debut during OpenAI’s product launch series isn’t the first indication the model could roll out in the near future. In March, then-Chief Technology Officer Mira Murati disclosed that the company was planning to launch the model before the end of the year. It’s possible the version of Sora that will roll out this week offers more advanced capabilities than the original preview release that OpenAI detailed in February.

Photo: Focal Foto/Flickr
