Victoria Gayton
Latest from Victoria Gayton
Enterprises tightening the stack as AI workloads surge
Enterprises are rebuilding their digital foundations as artificial intelligence accelerates demand for smarter, more resilient AI infrastructure. Across the industry, teams are reconsidering how tightly their systems need to work together to keep up with modern workloads. The pressure to move faster and handle more data is reshaping expectations for how next-generation platforms are designed, ...
Three insights you might have missed from theCUBE’s coverage of KubeCon + CloudNativeCon NA
For years, cloud computing’s entire pitch was “forget about hardware.” Kubernetes doubled down on that promise, abstracting infrastructure into something developers could safely ignore. But AI workloads don’t play by those rules. Inference engines, agentic systems and foundation models are pulling hardware back into the conversation — and this time, ignoring it isn’t an option. ...
Three insights you may have missed from theCUBE’s coverage of Celosphere 25
Enterprises are building the connective tissue that lets data, processes and decisions flow as one system. Intelligence is no longer confined to isolated dashboards or static reports. Now, it moves across silos in real time, fueled by AI agents and process intelligence platforms that map how work actually happens. It’s the digital equivalent of the ...
What to expect during AWS re:Invent: Join theCUBE Dec. 2-4
Enterprise infrastructure is hitting an inflection point, where production-scale workloads, purpose-built silicon and strategic ecosystem partnerships converge to power the next generation of intelligent systems. Organizations are moving beyond experimentation with AI, building deployments that demand massive compute, structured development practices and AI infrastructure to support autonomous agents at scale. One area of focus at AWS ...
Unified application intelligence reshapes how enterprises run mixed workloads
Infrastructure strategy is tilting toward unified application intelligence as enterprises try to keep up with increasingly mixed workloads across data centers and public clouds. Hybrid patterns are now the norm rather than the exception, and organizations are looking for ways to run virtual machines, containers and serverless side by side without multiplying complexity, according to Sudeep ...
What to expect during Microsoft Ignite: Join theCUBE Nov. 20
The infrastructure conversation is shifting. For years, organizations treated content management as a storage problem: more capacity, better organization and faster retrieval. But as AI agents move from experimentation into operational roles, the challenge isn’t just holding information anymore. It’s about infrastructure that knows what it contains, understands context without constant human curation and serves ...
Representation takes center stage as Merge Forward amplifies underrepresented voices in open-source communities
Open-source communities have spent years building powerful technology platforms, but tech community accessibility extends far beyond code. Creating truly inclusive spaces means addressing the full spectrum of human experience — from neurodiversity to communication needs — and ensuring that underrepresented voices shape the solutions designed to support them. Merge Forward, a new coalition launched two ...
Red Hat’s agentic AI strategy tackles enterprise AI ROI challenges
Enterprises have spent the past year spinning up artificial intelligence pilots, watching costs spiral and wondering when proof-of-concept magic will translate into production value. The gap between experimental models and scalable enterprise AI deployment has become an expensive problem with no clear path forward. Red Hat Inc.’s latest platform release targets that bottleneck head-on, emphasizing ...
Scaling smarter: Nvidia and Portworx advance self-service data management for Kubernetes
In enterprise R&D environments, teams need self-service infrastructure that scales without friction, enabling thousands of developers to spin up resources, test at velocity and meet deadlines without waiting on tickets or risking downtime. Nvidia Corp.’s platform engineering teams support chip design, firmware development and AI training workloads across on-premises and cloud Kubernetes clusters, all operating ...
Three insights you might have missed from theCUBE’s coverage of Nvidia’s GTC Washington, D.C. event
Enterprise infrastructure is no longer measured in servers or racks but in the architectural decisions that determine what those systems can accomplish at scale. As organizations race to deploy artificial intelligence workloads, the question is less about acquiring technology and more about integrating it through reference architectures that support rapid iteration without forcing teams to ...