

Startup Fortanix Inc. has announced the first full-fledged service on its confidential computing platform, giving companies an easier way to improve the quality and accuracy of artificial intelligence models and keep them secure.
Fortanix Confidential AI helps companies and organizations in privacy-sensitive industries such as financial services and healthcare to safely use private data to train AI models, the company revealed today.
The offering is based on Fortanix’s confidential computing platform, which taps the Software Guard Extensions, or SGX, technology in Intel Corp.’s latest central processing units to protect data and code while they’re being processed.
The SGX technology makes it possible to create “enclaves” within a server’s memory that are isolated from the rest of the machine. Those enclaves are inaccessible not just to other workloads, but also to the operating system and the hypervisor that manages the server.
SGX does, however, provide a way to verify that each protected memory enclave hasn’t been tampered with. An application running in an enclave can freely process sensitive data without exposing it, even if a hacker gains access to the underlying server hardware or operating system.
Where Fortanix comes in is that it helps users take advantage of SGX’s capabilities through a simple user interface, so there’s no need to make extensive modifications to their code.
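For a sense of what working with SGX looks like at a lower level, Fortanix also maintains an open-source Enclave Development Platform that lets ordinary Rust programs be compiled for the x86_64-fortanix-unknown-sgx target and executed inside an enclave. The sketch below is illustrative only and is not part of the Confidential AI service itself; the data and computation are hypothetical, and a real deployment would feed sensitive inputs over an attested, encrypted channel rather than hard-coding them.

```rust
// Illustrative sketch: a plain Rust program that, when built with Fortanix's
// open-source Enclave Development Platform (EDP), runs entirely inside an
// SGX enclave. The enclave boundary comes from the build target, not from
// anything special in the code.
//
// Build and run (assumes the EDP toolchain is installed):
//   cargo build --target x86_64-fortanix-unknown-sgx
//   ftxsgx-runner target/x86_64-fortanix-unknown-sgx/debug/example

fn main() {
    // Hypothetical sensitive input; in practice this would be delivered to
    // the enclave over an attested, encrypted channel.
    let patient_ages = [34u32, 51, 47, 62, 29];

    // Everything computed here executes in the enclave's protected memory,
    // out of reach of the host operating system and hypervisor.
    let mean: f64 = patient_ages.iter().sum::<u32>() as f64 / patient_ages.len() as f64;

    println!("mean age computed inside the enclave: {:.1}", mean);
}
```

The pitch of the higher-level service is that data science teams get the same isolation guarantees without having to touch a toolchain like this at all.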
Fortanix said Confidential AI is the first of what will eventually become a suite of different services that leverage confidential computing. It makes sense to apply confidential computing to AI, because it’s an area where many projects have been held up because of an unwillingness to risk using sensitive data to train models.
Rather than using private data that could make their models more accurate and reliable, many companies are forced to make educated assumptions. With Fortanix Confidential AI, companies can use their most private data without compromising privacy or risking falling out of compliance, the company said. As a result, users should be able to create more powerful AI models than before.
“For today’s AI teams, one thing that gets in the way of quality models is the fact that data teams aren’t able to fully utilize private data,” said Fortanix co-founder and Chief Executive Ambuj Kumar. “Confidential AI makes that problem disappear by ensuring that highly sensitive data can’t be compromised even while in use, giving organizations the peace of mind that comes with assured privacy and compliance.”
Holger Mueller of Constellation Research Inc. told SiliconANGLE that data is essential to creating reliable AI, but when that information is confidential, extra care must be taken in using it, which can slow down AI automation and insights.
“Speed is of the essence when trying to use AI as an innovation to run a business better,” Mueller said. “So it’s good to see a new approach to solve this issue with Fortanix using confidential computing in separate enclaves to isolate data, allowing it to be provisioned fast for AI used in next-generation applications.”
Fortanix said the service has already been tested by researchers at the University of California San Francisco, who used it to build a privacy-preserving analytics platform known as BeeKeeperAI.
Fortanix Confidential AI is available now in private preview ahead of its planned full launch in January.