Artificial intelligence infrastructure startup Parasail Inc. today announced that it has raised $32 million in early-stage funding.
Touring Capital and Kindred Ventures jointly led the Series A round. They were joined by several other funds including Samsung Electronics Co.’s startup investment arm.
Renting graphics processing units from a cloud provider typically requires companies to sign long-term procurement agreements. That's often not practical for startups with limited resources or for enterprises working on small-scale AI pilot projects. Parasail operates an inference-optimized cloud platform, the AI Supercloud, that enables customers to buy GPU capacity on a pay-per-token basis without long-term contracts.
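Per-token billing means the cost of a request scales directly with the text it processes. A minimal sketch of the arithmetic, using a made-up per-token rate rather than Parasail's actual pricing:

```python
def inference_cost(input_tokens: int, output_tokens: int,
                   price_per_million: float) -> float:
    """Estimate the cost of one inference request under per-token billing."""
    return (input_tokens + output_tokens) / 1_000_000 * price_per_million

# Hypothetical rate of $0.50 per million tokens (illustrative only):
# a request with 1,200 input and 300 output tokens bills for 1,500 tokens.
cost = inference_cost(input_tokens=1_200, output_tokens=300, price_per_million=0.50)
print(f"${cost:.6f}")
```

Because there is no reserved capacity, idle time costs nothing, which is the economic contrast with a long-term GPU lease.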
The most advanced graphics card that the company currently offers is the H200, which Nvidia Corp. launched in early 2024. The chipmaker has since introduced two newer GPU generations that offer significantly better performance. Some of Parasail’s GPUs run in internally operated clusters, while others are hosted by partners. The company reportedly has access to GPUs in 40 data centers across more than 15 countries.
Doing away with long-term GPU contracts isn't the only way that Parasail's platform streamlines inference. The company says that developers can deploy AI workloads with as few as five lines of code. Once a model is up and running, Parasail automates administrative tasks such as kernel configuration. Kernels are the GPU-optimized routines that carry out an AI model's computations.
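Parasail hasn't published the snippet itself, so the following is only a sketch of what a few-line deployment call against a hosted inference API might look like. The endpoint path, model name, base URL and key are placeholders invented for illustration, not Parasail's real API:

```python
import json
import urllib.request

# All names here are hypothetical placeholders, not Parasail's actual API.
def build_deploy_request(model: str, base_url: str, api_key: str) -> urllib.request.Request:
    """Construct (but do not send) a POST request that deploys a hosted model."""
    body = json.dumps({"model": model}).encode()
    return urllib.request.Request(
        f"{base_url}/v1/deployments",  # hypothetical endpoint path
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_deploy_request("meta-llama/Llama-3.1-8B-Instruct",
                           "https://api.example.com", "sk-placeholder")
print(req.full_url, req.get_method())
```

The point of the sketch is the shape of the workflow: one request names a model, and the platform handles provisioning behind it.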
Parasail enables customers to access chips in multiple ways. There are two serverless hosting options that automate much of the work usually involved in managing GPU clusters. Parasail also offers dedicated endpoints, hardware environments that trade off some simplicity for better performance.
Developers can tailor their dedicated endpoints' configuration to each AI workload. Parasail makes it possible to define how and when new GPU capacity is added as traffic to an AI model grows. Additionally, dedicated endpoints support quantization, a neural network compression method that reduces inference costs by storing model weights at lower numerical precision.
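To show what quantization buys in practice, here is a generic symmetric int8 weight-quantization sketch. This is a textbook illustration of the technique, not Parasail's implementation:

```python
def quantize_int8(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric int8 quantization: map floats to [-127, 127] with one scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    """Recover approximate float weights from their int8 codes."""
    return [v * scale for v in q]

weights = [0.52, -1.30, 0.08, 0.97]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Each int8 weight needs 1 byte versus 4 for float32: a 4x memory reduction,
# at the cost of a small rounding error (at most about scale / 2 per weight).
```

Smaller weights mean less GPU memory and memory bandwidth per request, which is where the inference-cost savings come from.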
Parasail’s lineup of infrastructure offerings is rounded out by a batch processing service. It’s geared toward AI workloads that process large data volumes and prioritize cost efficiency over performance. A scientific publisher, for example, could use the service to summarize academic paper archives.
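A throughput-oriented batch job like the summarization example above could be structured as follows. The `summarize` callable here is a local stand-in for a model call; a real job would invoke a hosted model instead:

```python
from typing import Callable

def run_batch(documents: list[str], summarize: Callable[[str], str],
              batch_size: int = 2) -> list[str]:
    """Process documents in fixed-size batches, trading latency for throughput."""
    results = []
    for start in range(0, len(documents), batch_size):
        batch = documents[start:start + batch_size]  # one batch per scheduling window
        results.extend(summarize(doc) for doc in batch)
    return results

# Stand-in summarizer: keep only the first sentence of each paper.
papers = ["Abstract one. Details follow.",
          "Abstract two. Details follow.",
          "Abstract three. Details follow."]
summaries = run_batch(papers, lambda text: text.split(". ")[0] + ".")
print(summaries)
```

Because nothing in a batch job needs an immediate answer, the platform can schedule it onto whatever capacity is cheapest at the time, which is the cost-over-performance trade-off the article describes.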
“AI is becoming the core infrastructure for modern software. But the infrastructure layer itself hasn’t kept up,” said Parasail founder and Chief Executive Officer Mike Henry. “We built Parasail so teams can deploy custom AI at massive scale without negotiating contracts, managing fragmented GPU supply or hiring performance engineering teams.”
The company will use its newly raised capital to enhance its platform’s inference workload optimization features. Additionally, Parasail plans to strengthen its partner ecosystem and invest in go-to-market initiatives.