

To ensure that artificial intelligence is used ethically, responsibly and securely, organizations must adopt safe AI practices to prevent harm and unintended consequences.
As AI’s role in business and daily life expands, integrating responsible safeguards helps maintain its potential as a tool for progress rather than a source of risk. Trustwise Inc. is leading efforts in this area, according to Manoj Saxena (pictured), founder, chairman and chief executive officer of Trustwise.
“We have an 82% failure rate, so the AI projects are not getting into production because people are scared,” Saxena said. “This is not just an opportunity to build a company and make money. This is what technology should be used for … to make technology safer. That’s sort of where Trustwise is, and that’s what our mission is — to build this trust layer of AI so people can use that with confidence and people can use that with quality.”
Saxena spoke with theCUBE’s Shelly Kramer for the Tech Innovation CUBEd Awards 2025 interview series, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the need for safe AI practices and Trustwise’s role in this area.
Safe AI practices encourage responsible innovation, ensuring that organizations can benefit industries and individuals without causing unintended harm. Clear AI guidelines should be at the forefront, according to Saxena. For its innovations in this area, Trustwise received a CUBEd “Most Innovative Tech Startup Leaders” award.
“It’s awards like this and recognition from customers that keep us driving,” he said. “I fundamentally believe we are working on the most important problem in AI today, which is, ‘How do you unlock the potential of these AI systems safely for both companies and society?’”
For AI to be widely adopted, businesses and individuals need to trust it. But safe AI practices call for a delicate balancing act between innovation, responsibility and robust safeguards, according to Saxena.
“There’s nothing artificial about AI,” he said. “This is going to impact human beings … and I created this nonprofit called the Responsible AI Institute nine years ago. When ChatGPT got launched … no one was looking at, ‘How do I build the dome, the safety control systems for these if it gets out of control?’”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage for the Tech Innovation CUBEd Awards 2025 interview series: