Lazarus Enterprises Inc. says it’s going to help organizations get around the “last mile” problem in artificial intelligence with the launch of its new Applied Intelligence Engine.
Announced today, the Applied Intelligence Engine is a model-agnostic infrastructure platform designed for regulated industries such as healthcare, financial services and government – sectors where so-called "hallucinations" aren't just a nuisance, but a deal breaker for AI adoption.
According to a 2025 study by the Massachusetts Institute of Technology, a staggering 95% of AI pilots at regulated enterprises fail to make it into production. It’s a reality that means information technology teams are trapped in a cycle of expensive experimentation without anything to show for it.
Lazarus says one reason for the failure to bring AI projects into production is that organizations spend too much time worrying about foundation models, and not enough on the underlying data architecture that supports them. In many cases when hallucinations occur, it’s not the fault of the model, but the way it interprets the data that informs its responses.
Lazarus' Applied Intelligence Engine is a modular layer that sits between an enterprise's data and whatever large language models it wants to use. This layer decouples the models from the organization's operational workflows, and the startup says it's the secret sauce that can improve the accuracy of their responses, enabling organizations to move from experimentation to full-scale deployment.
“The problem is not the model,” Lazarus Chief Executive Alex Panait wrote in a blog post announcing the launch. “The problem is the architecture around the model. Most enterprise AI fails not because the underlying models are weak, but because the systems wrapping them are fragile.”
Unlike simple "wrappers" that just pass users' prompts to an application programming interface, Lazarus' Applied Intelligence Engine is made up of three sophisticated modules. The first is a task execution module, or pipeline, that uses "problem engineering" to ensure deterministic and auditable outputs. The second, a knowledge augmentation module, acts as a retrieval system, pulling data from knowledge graphs and vector databases to ground AI responses in verifiable corporate data.
Finally, the automated orchestration module is an “agentic framework” that breaks up multistep workflows into individual tasks and assigns them to whatever model is best suited to performing them. It also supports “human-in-the-loop” escalation.
These modules combine to enable AI processes to run continuously and coordinate execution across multiple systems, Panait said.
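Lazarus hasn't published the engine's internals or APIs, but the general pattern it describes – breaking a workflow into tasks, routing each to the best-suited model, and escalating uncertain results to a human – can be illustrated with a minimal Python sketch. All names, thresholds and the stand-in "models" below are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch; none of these names come from Lazarus' actual platform.
@dataclass
class Task:
    name: str
    kind: str      # e.g. "extraction", "reasoning"
    payload: str

def route(task: Task, registry: dict[str, Callable[[str], tuple[str, float]]]) -> str:
    """Send a task to whichever model handles its kind; escalate to a human
    reviewer when the model's self-reported confidence is low."""
    model = registry[task.kind]
    answer, confidence = model(task.payload)
    if confidence < 0.8:  # illustrative threshold, not a vendor default
        return f"ESCALATED to human review: {task.name}"
    return answer

# Toy callables standing in for real LLM endpoints.
registry = {
    "extraction": lambda p: (p.upper(), 0.95),
    "reasoning":  lambda p: ("needs review", 0.4),
}

workflow = [
    Task("pull-fields", "extraction", "policy id 42"),
    Task("assess-risk", "reasoning", "tail exposure"),
]
results = [route(t, registry) for t in workflow]
```

The point of the sketch is the routing table: because each task kind maps to a callable rather than to a hard-coded model, any step can be reassigned or escalated without rewriting the workflow.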
According to Lazarus, its structured approach to AI execution is already having an impact in the wild. Early adopters include one unnamed healthcare provider, which saw urgent prior authorization review cycles decrease from several days to a matter of hours, representing a 75% increase in efficiency. Meanwhile, a reinsurance customer used Lazarus’s platform to identify about $30 million in hidden tail-risk exposure that its traditional portfolio sampling methods had missed.
These gains were made possible because Lazarus’ platform has been engineered to prioritize accurate extraction and evidence-based reasoning over generative fluency, Panait explained. The company benchmarks the system for the value it delivers, as opposed to “vanity metrics.” “In domains where being wrong is expensive, the quality of each decision matters more than the quantity of AI touchpoints,” he stressed.
Enterprise-grade features include a governance control plane that enables compliance teams to bake company policies directly into the AI’s runtime. Should a model start to drift, or should costs begin ramping up dramatically for a specific task, the control plane will automatically halt that workflow before it results in regulatory or financial exposure.
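The behavior described above – halting a workflow when spend or drift crosses a policy limit – is straightforward to sketch, even though Lazarus' actual control plane is certainly more elaborate. A minimal Python illustration, with hypothetical names and limits:

```python
# Hypothetical sketch of a runtime policy gate; not Lazarus' actual API.
class PolicyBreach(Exception):
    """Raised to halt a workflow before regulatory or financial exposure."""

class ControlPlane:
    def __init__(self, max_cost_usd: float, max_drift: float):
        self.max_cost_usd = max_cost_usd
        self.max_drift = max_drift
        self.spent = 0.0

    def check(self, step_cost: float, drift_score: float) -> None:
        # Called before each workflow step; accumulates spend and
        # compares the model's drift score against the policy limit.
        self.spent += step_cost
        if self.spent > self.max_cost_usd:
            raise PolicyBreach(f"cost cap exceeded: ${self.spent:.2f}")
        if drift_score > self.max_drift:
            raise PolicyBreach(f"drift score {drift_score:.2f} over limit")

# Usage: a step that pushes spend past the cap stops the run.
plane = ControlPlane(max_cost_usd=1.00, max_drift=0.3)
plane.check(step_cost=0.40, drift_score=0.1)  # within policy, proceeds
```

Compliance teams would express their policies as the constructor arguments here; each workflow step then clears the gate or raises before it can execute.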
Deployment is designed to be versatile too. The Applied Intelligence Engine can be integrated with existing cloud and on-premises infrastructure, meeting enterprises' data wherever it lives.
Perhaps the best thing about Lazarus is the way it allows for model independence. Companies can quickly and easily swap out models based on their needs, avoiding being locked into a specific model provider.
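That kind of model independence typically comes from coding against an interface rather than a specific vendor SDK. A minimal Python sketch of the idea, with a hypothetical protocol and a toy backend (not Lazarus' real interfaces):

```python
from typing import Protocol

class LLMBackend(Protocol):
    """Hypothetical interface: any model provider adapter implements this."""
    def complete(self, prompt: str) -> str: ...

class EchoModel:
    # Toy backend standing in for a real provider adapter.
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def run(prompt: str, backend: LLMBackend) -> str:
    # Calling code depends only on the protocol, so backends can be
    # swapped without changes here -- the essence of avoiding lock-in.
    return backend.complete(prompt)

result = run("hello", EchoModel())
```

Swapping providers then means writing one new adapter class, not rewriting the workflows that call it.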
“Our Applied Intelligence Engine addresses the operational, regulatory and infrastructure barriers that slow adoption in regulated industries,” Panait said. “With the right combination of problem engineering, context engineering and prompt engineering, more than 90% of Lazarus pilots translate into real-world deployment and measurable results.”