LG AI Research and generative artificial intelligence infrastructure company FriendliAI Inc. today announced a strategic partnership to make LG’s new AI model EXAONE 4.0 publicly available for the first time through FriendliAI’s serverless inference infrastructure.
EXAONE 4.0 is LG AI Research’s latest breakthrough in large language models, delivering advanced reasoning and natural language generation suited to a wide range of enterprise applications. Powered by FriendliAI’s optimized inference, EXAONE 4.0 can be deployed efficiently at scale, offering high throughput and low latency without the usual infrastructure constraints, the companies say.
Using FriendliAI’s Serverless Endpoints, organizations can integrate EXAONE 4.0 into their workflows with enterprise-grade performance and reliability. The partnership removes technical barriers that have historically limited access to powerful language models.
“Through our partnership with LG, we’re giving developers, from startups to global enterprises, a frictionless way to quickly begin deploying the EXAONE 4.0 model,” explains Byung-Gon Chun, founder and chief executive officer of FriendliAI. “With just a few lines of code, any organization can harness the full potential of EXAONE 4.0 without the usual complexity.”
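To illustrate what "a few lines of code" could look like: FriendliAI's Serverless Endpoints expose an OpenAI-compatible chat completions API, so a request can be built with nothing but the Python standard library. The base URL and the EXAONE model identifier below are assumptions for illustration and may differ from the production values; consult FriendliAI's documentation for the exact names.

```python
import json
import urllib.request

# Assumed endpoint for FriendliAI's OpenAI-compatible serverless API;
# verify against FriendliAI's current documentation before use.
API_URL = "https://api.friendli.ai/serverless/v1/chat/completions"

def build_request(token: str, prompt: str,
                  model: str = "LGAI-EXAONE/EXAONE-4.0") -> urllib.request.Request:
    """Construct (but do not send) a chat-completion HTTP request.

    The model ID is a hypothetical placeholder for an EXAONE 4.0 variant.
    """
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(API_URL, data=body, headers=headers, method="POST")

req = build_request("YOUR_API_TOKEN", "Summarize EXAONE 4.0 in one sentence.")
# Actually sending it would be: urllib.request.urlopen(req) — requires a valid token.
```

Because the endpoint follows the OpenAI wire format, existing OpenAI client libraries should also work by pointing their base URL at the serverless endpoint, which is the kind of drop-in integration the quote describes.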
LG AI Research debuted its EXAONE series of LLMs in 2021, with EXAONE standing for “Expert AI for Everyone.” The multimodal large-scale AI models can process both language and visual data. EXAONE 4.0 is an upgraded, integrated model that builds on the LLM-based EXAONE 3.5 by adding reasoning functionality, including self-verification capabilities that produce higher-quality answers, making it useful in specialized fields such as science, mathematics, biology and chemistry.
The collaboration highlights an industry shift toward more accessible and efficient deployment of generative AI models. Organizations can sidestep the traditionally high costs and engineering overhead associated with running large models by leveraging FriendliAI’s serverless architecture, allowing for faster experimentation and product development cycles.
FriendliAI supports hundreds of thousands of models via Hugging Face, bringing model coverage and performance tuning capabilities to the partnership. The company’s infrastructure is designed to accelerate agentic and custom AI deployments, giving enterprises the flexibility to fine-tune and serve models with minimal latency.
“FriendliAI’s high-performance platform makes it easy for organizations to test and deploy applications built using the new series of EXAONE 4.0 models without investing in infrastructure,” said Hwayoung (Edward) Lee, vice president and lead of the AI Business Transformation Unit at LG AI Research. “Their inference platform delivers the efficiency and speed needed for real-world deployment of EXAONE 4.0 at scale.”