UPDATED 16:30 EDT / MARCH 16 2026


Nvidia expands physical AI with communication and data processing infrastructure blueprints

Nvidia Corp. today announced blueprints for artificial intelligence training data generation to enable massive-scale processing and generation of data for the AI models needed to drive the next generation of robots.

The company also said it is partnering with T-Mobile and Nokia to work with a growing ecosystem of developers to bring robots, autonomous cars, sensors and edge applications into AI networks. The collaboration will use high-performance communication networks and AI radio access networks to distribute and deploy AI compute and apps over wide areas.

As AI moves beyond purely digital environments such as reading files, summarizing emails and powering chatbots, and combines with sensors, cameras and robotic limbs, it gains agency. This technology is called physical AI: robots, drones and autonomous vehicles, systems that must perceive and understand their surroundings, reason through tasks and adapt to their environment in real time.

“Physical AI is the next frontier of the AI revolution, where success depends on the ability to generate massive amounts of data,” said Rev Lebaredian, vice president of Omniverse and simulation technologies at Nvidia.

To support this need, Nvidia released a new Nvidia Physical AI Data Factory Blueprint, an open reference architecture that defines how training data is generated, augmented and evaluated for physical AI systems. It is designed to reduce costs and speed up training for the complex work of building physical AI systems at scale.

The company collaborated with Microsoft Corp. and Nebius Group N.V. to integrate the open blueprint with their cloud infrastructure. Physical AI development firms, including FieldAI, Hexagon Robotics, Linker Vision, Milestone Systems, RoboForce Inc., Skild AI, Teradyne Inc. and Uber Technologies Inc., have all signed on to use the blueprint to accelerate their own vision AI projects and autonomous vehicle development.

Under the blueprint, Nvidia said it is combining a number of technologies. Its Cosmos world foundation model tools provide curated search for annotating real-world data, along with automated evaluation that scores and filters generated data to ensure physical accuracy. Cosmos also expands and diversifies curated data by multiplying real and simulated inputs, better capturing rare, long-tail scenarios across environment and lighting conditions.
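The generate, score and filter loop described above can be sketched in a few lines. This is a hypothetical illustration only: every function name, threshold and data shape here is invented for the sketch and does not reflect Nvidia's actual Cosmos APIs.

```python
# Hypothetical sketch of a generate -> score -> filter data-factory loop.
# All names and thresholds are invented for illustration, not Nvidia's APIs.
import random

def augment(sample: dict) -> list[dict]:
    """Multiply one real clip into variants across lighting conditions."""
    return [{**sample, "lighting": cond} for cond in ("day", "dusk", "night", "rain")]

def physical_accuracy_score(sample: dict) -> float:
    """Stand-in for a learned critic that scores physical plausibility (0..1)."""
    random.seed(hash((sample["id"], sample["lighting"])) % 2**32)
    return random.random()

def build_training_set(real_samples: list[dict], threshold: float = 0.5) -> list[dict]:
    """Expand real data into variants, then keep only those scoring above threshold."""
    candidates = [v for s in real_samples for v in augment(s)]
    return [c for c in candidates if physical_accuracy_score(c) >= threshold]

curated = build_training_set([{"id": "clip-001"}, {"id": "clip-002"}])
print(f"kept {len(curated)} of 8 generated variants")
```

In a real pipeline the critic would be a model evaluating physical consistency rather than a random stand-in, but the control flow, which multiplies scarce real data and then automatically filters it, is the core idea of the blueprint.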

Additionally, Nvidia OSMO, an open-source orchestration framework, will help unify and manage robotics workflows across compute environments and reduce manual tasks so developers can focus on building models. The framework now integrates with coding agents such as Anthropic PBC’s Claude Code, OpenAI Group PBC’s Codex and Cursor.

Telecommunications and physical AI

Nvidia, T-Mobile US Inc. and Nokia said today they’re working to bring AI applications into the next generation of AI-RAN infrastructure.

AI-RAN, or artificial intelligence radio access network, represents an ongoing evolution of telecommunications that transforms wireless networks into platforms for distributing edge AI compute. This will bring high-performance AI vision, computation and other capabilities to broad regions, allowing AI agents to understand the physical world across cities, utilities and industrial worksites.

T-Mobile became the first company in the United States to pilot Nvidia’s AI-RAN infrastructure with Nokia’s anyRAN software. It is now working with select Nvidia physical AI partners to demonstrate how cellular sites and mobile switches can support distributed edge AI workloads on 5G.

“Telecommunication networks are evolving into the AI infrastructure enabling billions of devices — from vision AI agents to robots and autonomous vehicles — to see, hear and act in real time,” said Nvidia founder and Chief Executive Jensen Huang.

Nvidia said AI-RAN is built to address what it believes is a critical infrastructure gap: the lack of low-latency, secure and widely available connectivity. T-Mobile’s 5G standalone network will provide wide-area backbone connectivity, while Nvidia-powered infrastructure will offload heavy computation from devices to the nearest edge locations.

With near-edge compute, developers can build use cases such as smart city operations, automated utility inspection, vision-based facility management and real-time industrial safety. Very small models can run directly on devices, but they lack the capacity for significant processing and miss important details.

Conversely, extremely large models require tremendous amounts of power and data to run, which means offloading data across telecommunications lines to distant datacenters for processing. That adds delay in getting critical information back to a device. The happy middle ground is a large-enough AI model running on a nearby server, within meters or kilometers, that responds in milliseconds or less and has enough computational power to reason about images and video and react safely in real time.
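The tradeoff above can be made concrete with a rough back-of-envelope sketch. All numbers here are illustrative assumptions for the sake of the arithmetic, not measured figures from Nvidia or T-Mobile.

```python
# Back-of-envelope latency comparison for on-device, near-edge and
# distant-datacenter inference. All numbers are illustrative assumptions.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # light travels roughly 200 km per ms in fiber

def round_trip_ms(distance_km: float, inference_ms: float,
                  network_overhead_ms: float = 5.0) -> float:
    """Time to send a frame out, run inference, and get the result back."""
    propagation = 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS  # out and back
    return propagation + network_overhead_ms + inference_ms

# Tiny on-device model: no network hop, but slow, weak inference.
on_device = round_trip_ms(0, inference_ms=40, network_overhead_ms=0)
# Mid-size model on a near-edge server a few kilometers away.
near_edge = round_trip_ms(5, inference_ms=15)
# Large model in a datacenter 2,000 km away.
datacenter = round_trip_ms(2000, inference_ms=15)

print(f"on-device:  {on_device:.1f} ms")
print(f"near-edge:  {near_edge:.1f} ms")
print(f"datacenter: {datacenter:.1f} ms")
```

Under these assumed numbers the near-edge option wins: propagation over a few kilometers is negligible, so a capable model a short hop away beats both the weak on-device model and the distant datacenter, which pays tens of milliseconds in round-trip propagation alone.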

“Turning networks into distributed AI computing platforms to unlock the full potential of Physical AI will require ultra-low latency and space-time coherency at the network edge for billions of endpoints,” said T-Mobile CEO Srini Gopalan.

At an industrial scale, nearly 1.5 billion cameras run globally, but less than 1% of the footage generated ever gets reviewed by humans. To close this gap, Nvidia introduced Metropolis VSS 3 Blueprint, which allows AI agents to reason over video from the edge to the cloud.

It allows them to decompose video to understand safety issues and lighting changes, predict potential hazards and recognize specific events. An agent watching a factory floor could reason that an assembly line is on the verge of failing and warn workers before an incident occurs. Another agent connected to a pipeline could watch for leaks and dispatch repair crews when wet spots appear, or notify disaster recovery teams after a major storm strikes infrastructure.

Partners using VSS to enhance safety include Caterpillar, KION, Hitachi, HCLTech, Siemens Energy, Tulip and Telit Cinterion.

Image: Nvidia
