AI
Generative AI is democratizing application development, putting code-generation tools in the hands of virtually any developer. But as AI-assisted development accelerates, the enterprise technology industry faces a harder question: How do you build trusted AI development practices around code that AI writes for you?
The answer, increasingly, is a discipline unto itself. Verification, security and production-readiness are becoming as central to AI-assisted development as the code generation itself — and enterprises that skip those steps are learning the cost the hard way, according to Jenny Tsai-Smith (pictured), senior vice president of product management at Oracle Corp.
“Vibe coding is fun, but is it safe? That’s the question,” Tsai-Smith said. “Once you generate 10,000 lines of code in less than 10 minutes, can you actually just go deploy it and run it and have it manage your bank system? No, you’re not going to do that. It’s about using AI in a way that you can then take the generative code and trust it and deploy it.”
Tsai-Smith spoke with theCUBE’s Dave Vellante at the Oracle Data Deep Dive NYC event, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how Oracle is approaching trusted AI development, agent memory infrastructure and open data interoperability. (* Disclosure below.)
Oracle’s answer to trusted AI development centers on moving security enforcement out of the application tier and into the database itself. Rather than relying on application-layer controls that can be bypassed by dynamically generated queries, the company is pushing access rules down to the row and column level — tying them directly to end-user identities, Tsai-Smith explained. With that architecture, whether a query originates from a human, a large language model or an AI agent arriving via a Model Context Protocol connection, the database itself enforces what data can actually be seen.
“Trust really means confidence — confidence in the correctness of the data, in the correctness of the access control of the data and then correctness in terms of the outcome of using that data,” she said. “We’re building data protection within the database. You’re putting in essentially end-user specific data access rules in the database, and we propagate the user identity from beginning to end into the database.”
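The enforcement model Tsai-Smith describes — declarative, per-user rules applied at the data tier rather than in the application — can be sketched conceptually. This is not Oracle's implementation; the table, rule names and helper function below are illustrative assumptions, shown only to make the row- and column-level idea concrete:

```python
# Conceptual sketch (NOT Oracle's implementation): access rules enforced at
# the data tier, keyed to the end-user identity that is propagated with every
# query -- regardless of whether the query came from a human, an LLM or an agent.

ROWS = [
    {"account": "A-100", "region": "east", "balance": 5200},
    {"account": "A-200", "region": "west", "balance": 910},
]

# Declarative, per-user rules: a row filter (predicate) and a visible-column set.
ACCESS_RULES = {
    "analyst_east": {"row_filter": lambda r: r["region"] == "east",
                     "columns": {"account", "region"}},
    "auditor":      {"row_filter": lambda r: True,
                     "columns": {"account", "region", "balance"}},
}

def query(user_id, table):
    """Apply the caller's rules inside the data tier, not in the application."""
    rule = ACCESS_RULES.get(user_id)
    if rule is None:
        return []  # unknown identity: no rule means no data
    return [{k: v for k, v in row.items() if k in rule["columns"]}
            for row in table if rule["row_filter"](row)]
```

Because the rules live with the data, an AI-generated query arriving under the identity `analyst_east` can only ever see east-region rows with the balance column stripped — the application layer never gets a chance to leak more.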
The upcoming Oracle Deep Data Security feature operationalizes that philosophy with declarative, database-native controls that enforce access privileges at the row and column level even against AI-generated SQL, Tsai-Smith noted. On the development side, the APEX AI Application Generator generates human-readable pseudo-code that developers can review and modify before final generation — a deliberate checkpoint for trusted AI development.
“We actually have a tool that we’ve offered for quite a long time called Oracle APEX,” she said. “Pretty soon we will have that tool be able to generate applications in a way that allows the application developer to look at an interim version of the generated code — so it’s human-readable — that you can look through and make modifications and then generate the code.”
For agentic workloads, Oracle is also introducing the AI Private Agent Factory, a no-code canvas where data analysts and scientists — not just developers — can drag, drop and wire together multi-agent workflows, then test, deploy and monitor them inside a single tool. Alongside that, Unified Memory Core taps Oracle’s converged database architecture to give agents both long-term and short-term memory across relational, graph and spatial formats. On openness, Oracle is extending its AI vector similarity search to data stored in Apache Iceberg tables — allowing enterprises to query vectors held in cheap object storage alongside data in the Oracle AI Database without moving it. All of these offerings circle back to the same imperative, according to Tsai-Smith.
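The similarity-search primitive Oracle is extending to Iceberg tables boils down to ranking stored embedding vectors by closeness to a query vector. A minimal sketch of that ranking step, using cosine similarity over an in-memory corpus (the document IDs and vectors are made-up illustrations, not Oracle APIs):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, corpus, k=1):
    """Return the IDs of the k stored vectors most similar to the query."""
    scored = sorted(corpus.items(),
                    key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Toy corpus: in Oracle's scenario these vectors would sit in Iceberg tables
# on object storage and be queried in place, without moving the data.
corpus = {"doc1": [1.0, 0.0], "doc2": [0.0, 1.0], "doc3": [0.9, 0.1]}
```

Calling `top_k([1.0, 0.0], corpus, k=2)` ranks `doc1` first and `doc3` second; what the Iceberg integration adds is running this kind of search against vectors held in cheap object storage alongside data in the database.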
“Think about and take a look at the capabilities that we have delivered through Oracle AI Database 26ai and see if some of those things that we’ve introduced could help you do a better job in leveraging AI and also trusting [the output],” Tsai-Smith said. “The whole notion of being able to verify what’s being generated and to be able to trust the generated code is really important.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of Oracle Data Deep Dive NYC:
(* Disclosure: TheCUBE is a paid media partner for the Oracle Data Deep Dive NYC event. Neither Oracle, the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Support our mission to keep content open and free by engaging with theCUBE community. Join theCUBE’s Alumni Trust Network, where technology leaders connect, share intelligence and create opportunities.
Founded by tech visionaries John Furrier and Dave Vellante, SiliconANGLE Media has built a dynamic ecosystem of industry-leading digital media brands that reach 15+ million elite tech professionals. Our new proprietary theCUBE AI Video Cloud is breaking ground in audience interaction, leveraging theCUBEai.com neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.