UPDATED 15:26 EDT / NOVEMBER 18 2021


OpenAI makes GPT-3 more broadly available to developers

OpenAI today broadened availability of its cloud-based OpenAI API service, which enables developers to build applications based on the research group's sophisticated GPT-3 artificial intelligence model.

Previously, developers had to sign up for a waitlist, and availability was limited.

OpenAI said that the move to make OpenAI API more widely accessible was made possible by the addition of new safety features. OpenAI has introduced a free content filter to help developers using GPT-3 in their applications detect abuse. The research group said it can review developers’ applications before they launch and monitor for misuse.

“Tens of thousands of developers are already taking advantage of powerful AI models through our platform,” the OpenAI team stated in a blog post today. “We believe that by opening access to these models via an easy-to-use API, more developers will find creative ways to apply AI to a large number of useful applications and open problems.”

GPT-3 is a natural language processing model that debuted last year. The AI can write essays on business topics, translate text and even generate software code. OpenAI makes GPT-3 available to developers through its OpenAI API service, which is a cloud-based application programming interface priced based on usage. 
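To make the usage-based pricing concrete, the sketch below assembles the JSON body a client might send to the API's completions endpoint. The engine name, default parameters, and helper function are illustrative assumptions, not the exact request a given application would make:

```python
import json

def build_completion_request(prompt, engine="davinci", max_tokens=64, temperature=0.7):
    """Build the JSON body for a POST to /v1/engines/<engine>/completions.

    The engine name and parameter defaults here are assumptions for
    illustration; actual availability depends on the developer's account.
    """
    body = {
        "prompt": prompt,
        "max_tokens": max_tokens,   # caps generated length, which drives usage-based cost
        "temperature": temperature, # higher values produce more varied text
    }
    return f"/v1/engines/{engine}/completions", json.dumps(body)

path, payload = build_completion_request("Summarize the benefits of cloud APIs:")
print(path)
```

Because billing scales with tokens processed, the `max_tokens` cap is the main lever developers use to control per-request cost.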

GPT-3’s ability to perform tasks ranging from translation to coding is the result of a unique architecture developed by OpenAI’s researchers. GPT-3 features 175 billion parameters, the settings that determine how an AI model processes data. Broadly, the more parameters a neural network has, the more capable it tends to be. At the time of its debut in 2020, GPT-3 featured roughly 10 times more parameters than the next-largest model in its category.

OpenAI has added new features to the OpenAI API since first introducing the service last year to make it even more useful for AI developers. 

One of the most significant enhancements is a set of neural networks known as the Instruct series. The Instruct neural networks are specialized versions of GPT-3 that support greater customization of processing results. Thanks to the increased customizability, developers can optimize how their AI applications carry out tasks more granularly than before.

A software team can, for example, use the Instruct series to build an application that organizes scientific papers by topic. Developers could instruct the AI models not only to identify the topic of each paper but also to generate a one-sentence summary of its contents. Moreover, thanks to the AI models’ customizability, the developers could even specify that the summaries should be generated in an easy-to-understand format.
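The paper-organizing scenario above amounts to phrasing the task as an explicit instruction, which the Instruct-series models are tuned to follow more reliably than base GPT-3. The helper below is a hypothetical sketch of how a team might construct such a prompt and request:

```python
import json

def build_instruct_prompt(abstract):
    """Hypothetical helper: express the classification-plus-summary task
    as a plain-language instruction for an Instruct-series model."""
    return (
        "Classify the topic of the following paper abstract, then "
        "summarize it in one plain-language sentence.\n\n"
        f"Abstract: {abstract}\n\nTopic and summary:"
    )

prompt = build_instruct_prompt("We study how transformer performance scales with model size.")
request = {
    "prompt": prompt,
    "max_tokens": 80,
    "temperature": 0.2,  # a low temperature keeps classifications consistent across papers
}
print(json.dumps(request, indent=2))
```

The key design point is that the desired output format ("one plain-language sentence") lives in the prompt itself rather than in code, which is what makes the behavior customizable without retraining.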

Another new capability is a feature called Answers. The feature should make GPT-3 more useful for companies building applications such as customer support chatbots.

With Answers, a company can provide GPT-3 with information from sources such as product guides and internal knowledge bases. The company may then configure GPT-3 to answer questions based on the information it supplied. The feature could enable GPT-3 to more accurately process requests such as customer support inquiries that often require drawing on specialized knowledge related to a specific product or topic. 
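In practice, supplying that information meant sending the supporting documents alongside the question. The sketch below assembles an illustrative request body for the API's then-current answers endpoint; the field names follow the 2021-era documentation but should be treated as assumptions, since the endpoint has since been deprecated:

```python
def build_answers_request(question, documents):
    """Assemble an illustrative body for the 2021-era /v1/answers endpoint.

    Field names and the example Q&A pair are assumptions for illustration.
    `documents` holds short passages drawn from sources such as product
    guides or internal knowledge bases.
    """
    return {
        "question": question,
        "documents": documents,
        # A worked example steers the model toward grounded, on-format answers.
        "examples_context": "The router supports WPA3 encryption.",
        "examples": [["Does the router support WPA3?", "Yes, it supports WPA3."]],
        "max_tokens": 60,
    }

body = build_answers_request(
    "How do I reset the device?",
    ["Hold the reset button for 10 seconds to restore factory settings."],
)
print(body["question"])
```

This pattern is why the feature suits support chatbots: the model's answer is anchored to the company's own documents rather than to whatever general knowledge it absorbed during training.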

The new features in the OpenAI API are the result of a push by the research group to grow the number of use cases to which GPT-3 can be applied. As part of the effort, OpenAI has also developed Codex, an AI model that can automatically generate software code based on text prompts.

Codex powers the GitHub Copilot coding assistant that Microsoft Corp.’s GitHub subsidiary debuted in July. More recently, Microsoft introduced an offering called Azure OpenAI Service, which allows customers of its Azure public cloud to access GPT-3.

Product development is one of several areas where Microsoft and OpenAI collaborate. Microsoft made a $1 billion investment in the group two years ago to support its research. Additionally, the tech giant has built a supercomputer for OpenAI in Azure that features 10,000 graphics cards and 285,000 central processing unit cores. The supercomputer is specifically designed to support the development of large-scale AI models such as GPT-3, which require a great deal of processing power to train because of their complexity.
