

Newly launched artificial intelligence startup Portkey.ai today announced that it has raised $3 million in seed funding.
Lightspeed Venture Partners led the round. According to Portkey.ai, the venture capital firm was joined by “prominent figures” from Amazon Web Services Inc., OpenAI LP, Cloudflare Inc. and other major enterprise technology companies.
AI providers such as OpenAI sell their large language models through application programming interfaces, or APIs. Integrating those APIs into an application can theoretically take as little as a few hours. In practice, however, the task often requires weeks of work because companies must write a large amount of supporting code.
Portkey.ai, officially Portkey Inc., provides that supporting code in a ready-to-use form via a cloud platform. Portkey claims its platform can help companies build AI software faster by reducing the number of features they must implement from scratch. Moreover, the startup promises to reduce the associated costs.
“Tech chiefs are facing a rush of demand from teams for AI apps that will save money without too much delay. But they cannot say yes to all their requests,” said Portkey.ai co-founder and Chief Executive Officer Rohit Agarwal. “There’s so much work to be done that there are often competing priorities.”
Much of the custom code in AI applications is designed to mitigate the impact of potential technical issues. According to Portkey.ai, its platform eases that task by providing prepackaged error mitigation features.
If a language model API goes offline, the applications that use it to process data can experience technical issues. To address this risk, the company provides a tool that detects when a language model API fails to process the requests it receives from a workload. It can then either retry the request or send it to a backup AI model.
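The article does not detail Portkey.ai's implementation, but the retry-and-fallback pattern it describes typically looks something like the sketch below. The function names call_primary_model and call_backup_model are hypothetical stand-ins, not Portkey APIs.

```python
import time

# Hypothetical stand-ins for two language model APIs; Portkey's actual
# interface is not described in the article, so these are illustrative only.
def call_primary_model(prompt: str) -> str:
    return f"primary answer to: {prompt}"

def call_backup_model(prompt: str) -> str:
    return f"backup answer to: {prompt}"

def generate_with_fallback(prompt: str, max_retries: int = 3) -> str:
    """Retry the primary model on transient failures, then fall back."""
    for attempt in range(max_retries):
        try:
            return call_primary_model(prompt)
        except Exception:
            # Back off briefly before retrying the primary API.
            time.sleep(2 ** attempt)
    # The primary API kept failing; route the request to a backup model.
    return call_backup_model(prompt)
```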
Many language model APIs are priced based on usage. As a result, reducing an AI application’s API usage can help developers lower costs. Portkey.ai promises to ease that task as well.
Using the platform, a software team can cache the answer that an AI generates in response to a commonly recurring user question. The next time a user submits that question, the corresponding answer can be retrieved from the cache instead of being regenerated. This arrangement reduces the number of user queries that have to be sent to the language model API an application uses, which lowers machine learning expenses.
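A minimal sketch of that caching approach appears below, assuming an in-memory dictionary keyed by a hash of the prompt; a production setup would more likely use a shared store such as Redis. The answer_from_llm function is a hypothetical placeholder for the underlying language model call.

```python
import hashlib

# In-memory cache keyed by a hash of the normalized prompt.
_cache: dict[str, str] = {}

def answer_from_llm(prompt: str) -> str:
    # Hypothetical stand-in for a paid language model API call.
    return f"generated answer to: {prompt}"

def cached_answer(prompt: str) -> str:
    key = hashlib.sha256(prompt.strip().lower().encode()).hexdigest()
    if key in _cache:
        # Cache hit: no request is sent to the metered LLM API.
        return _cache[key]
    answer = answer_from_llm(prompt)
    _cache[key] = answer
    return answer
```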
Portkey.ai provides a monitoring dashboard for tracking how an application uses language model APIs. The dashboard tracks metrics such as the number of requests that are sent to each API and the amount of data those requests contain. It also monitors the associated costs.
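The metrics the dashboard reportedly tracks could be tallied with a simple per-API counter like the one below. This is an illustrative sketch only: real providers typically bill per token, so the per-kilobyte rate used here is a made-up placeholder, and ApiUsageTracker is not a Portkey component.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ApiUsageTracker:
    """Tallies request counts, payload bytes, and estimated spend per LLM API."""
    requests: dict = field(default_factory=lambda: defaultdict(int))
    bytes_sent: dict = field(default_factory=lambda: defaultdict(int))
    cost_usd: dict = field(default_factory=lambda: defaultdict(float))

    def record(self, api_name: str, payload: str, cost_per_kb: float = 0.001) -> None:
        # cost_per_kb is purely illustrative; actual pricing is usually per token.
        size = len(payload.encode("utf-8"))
        self.requests[api_name] += 1
        self.bytes_sent[api_name] += size
        self.cost_usd[api_name] += (size / 1024) * cost_per_kb

tracker = ApiUsageTracker()
tracker.record("example-llm-api", "Summarize this quarterly report...")
print(dict(tracker.requests), dict(tracker.bytes_sent), dict(tracker.cost_usd))
```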
Though it only launched in January, Portkey.ai says that it already processes millions of application requests per day for customers. One of those customers is Postman Inc., a development tooling startup that received a $5.6 billion valuation in 2021.
The company will use its seed funding round to hire more employees. Additionally, it plans to roll out more features for its platform. As part of the effort, Portkey.ai intends to build a tool that will help developers test new language models for potential technical issues before integrating them into an application.