Amazon Web Services Inc. is trying to carve out a niche for itself in social media livestreaming with the launch of a new service called AWS Elemental Inference — essentially an AI agent that automatically transforms video footage and optimizes it for viewing on vertical displays in real time.
The company said the service will help broadcasters and livestreamers better cater to audiences on social media platforms such as Instagram Reels, TikTok and YouTube Shorts, which usually display content in vertical formats. That poses a problem for broadcasters, because most video is still produced in the traditional landscape format and requires extensive manual editing to optimize it for vertical displays. The resulting broadcast delays mean viewers on social media platforms often miss out on live moments, Amazon says.
AWS Elemental Inference makes it possible for live video streams to be optimized for vertical platforms without any manual editing. It works by analyzing video feeds in real time and automatically executing the multistep transformations needed, so that viewers on social media platforms won't miss out on any key details. The platform tracks subjects and ensures the most important action stays in frame as it converts the footage to a vertical 9:16 aspect ratio.
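The geometry behind that conversion is simple to illustrate. The sketch below is an illustration of the cropping arithmetic only, not AWS's actual implementation; the function name and the subject-position input are hypothetical. It computes a full-height 9:16 crop window inside a landscape frame, centered on a tracked subject and clamped to the frame edges:

```python
def vertical_crop(frame_w: int, frame_h: int, subject_x: int) -> tuple[int, int]:
    """Return (left, width) of a full-height 9:16 crop centered on subject_x.

    Hypothetical helper illustrating the cropping arithmetic only;
    AWS has not published how Elemental Inference frames its crops.
    """
    crop_w = round(frame_h * 9 / 16)             # 608 px for a 1080p frame
    left = subject_x - crop_w // 2               # center the window on the subject
    left = max(0, min(left, frame_w - crop_w))   # clamp to the frame edges
    return left, crop_w
```

For a 1920x1080 feed with the subject at x = 960, this yields a 608-pixel-wide window starting at x = 656; a subject near either edge simply pins the window to that edge rather than cropping past it.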
The service is launching today in four AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Mumbai).
According to AWS, it can apply vertical transformations with latency of just 6-10 seconds, enabling almost real-time livestreaming on vertical screens. For broadcasters and other content creators, it means they no longer need to carry out extensive editing themselves, and can instead just focus on creating high-quality video.
AWS said the Elemental Inference service integrates with its existing AWS Elemental MediaLive platform for live video encoding, allowing users to enable AI optimizations without having to adapt their existing video architecture. It's powered by fully managed foundation models that are automatically updated and optimized by AWS's engineers, so content creators don't need any AI expertise to take advantage of it.
In addition to vertical transformations, AWS Elemental Inference also performs advanced metadata analysis in real time, automatically detecting and extracting clips from live video to distribute as vertically formatted highlights. For live sports, this means it can identify moments such as touchdowns, goals, game-changing plays and emotional peaks the moment they occur. The highlights can be optimized for vertical platforms almost instantly, ready for broadcast directly to social media.