UPDATED 17:21 EDT / MAY 01 2026

AI inference is transforming live media production, enabling real-time video formats, faster engagement and scalable AI-driven workflows across platforms.

Bringing real-time AI into live video workflows: AWS Elemental Inference at NAB

Media and entertainment is undergoing a structural shift driven by fragmented audiences, new content formats and the growing role of AI in production workflows. The traditional broadcast model — built around scheduled programming and horizontal viewing — no longer reflects how audiences consume content.

Platforms such as TikTok, Instagram Reels and YouTube Shorts have normalized vertical video as a primary format, particularly among younger viewers. For organizations such as Fox Corp., this creates a dual challenge: delivering content across multiple platforms while managing cost and operational complexity.

At the same time, AI has moved beyond experimentation. The real value now lies in inference, where AI operates in real time within production environments, shaping outputs as content is created.

Scaling live production across formats

Live production, especially in sports, is one of the most complex areas in media. Content must be captured, formatted and distributed instantly across multiple channels.

Historically, adapting 16:9 broadcast feeds into vertical formats required dedicated teams, manual framing and post-event editing. This approach struggles to keep up with audience expectations for real-time highlights and engagement during live events.

Fox addressed this by building dedicated “vertical control rooms” to support social content. While effective, the model was labor-intensive and limited how much focus teams could place on storytelling.

Across the industry, the pressure points are consistent:

  • Latency reduces engagement
  • Manual workflows limit scale
  • Fragmented processes reduce return on investment

Meeting demand now requires real-time transformation built directly into the production pipeline.

Embedding AI into the workflow

The collaboration between Amazon Web Services Inc. and Fox focused on integrating AI inference into live production rather than adding it as a separate step.

The goal was to:

  • Generate multiple formats from a single livestream
  • Identify key moments in real time
  • Enable immediate distribution across platforms
  • Reduce manual effort while maintaining editorial control

This approach positions AI as a set of supporting agents that handle repetitive tasks, allowing production teams to focus on creative decisions.

Elemental Inference in action

AWS introduced Elemental Inference as an extension of its Elemental media services, integrating directly into tools such as MediaLive. Instead of a standalone system, inference becomes part of the production stack.

Key capabilities include:

  • Real-time vertical transformation: Dynamically reframes live video by tracking subjects and motion, creating a natural vertical viewing experience (a conceptual sketch follows this list)
  • Live highlight detection: Identifies key moments as they happen, enabling rapid clipping and publishing
  • Parallel outputs: Produces horizontal and vertical formats simultaneously, reducing turnaround time
  • Integrated distribution: Supports editing and publishing during live events to drive immediate engagement

While sports is the primary use case, the same model applies to news and talk programming through speaker and scene detection.

From efficiency to return on AI

Elemental Inference improves operational efficiency by reducing manual workflows and accelerating time-to-publish. More importantly, it increases engagement by enabling real-time content distribution across platforms.

It also shifts how teams work. By offloading repetitive tasks, production staff can focus on storytelling and audience engagement. At the same time, tighter integration between content and audience data creates a real-time feedback loop: create → distribute → analyze → optimize.

The bigger shift

At NAB 2026, inference emerged as a foundational layer in media production. AI is no longer an add-on; it is becoming embedded infrastructure.

For media organizations, the implications are clear:

  • Inference is becoming a control layer for live content
  • Speed and format flexibility are now competitive requirements
  • AI-driven workflows enable both efficiency and creative scale

This pattern extends beyond media. Across industries, the value of AI is realized when it is integrated into real-world systems and workflows.

For more in-depth insights, visit theCUBEresearch.com.

