IBC2025 Session Summary: AI, Multimodality, and the Intelligent Edge

Moderator Maycock (NM Advisors) was joined by Anton Dvorkovich (Dubformer), Barbara Marshall (HP), Carlos Ramirez Suarez (Ramblox Limited) and Sujatha Gopal (Tata Consultancy Services) on the Future Tech stage on Monday morning. The session examined the integration of artificial intelligence with multimodality at the intelligent edge, considering its implications for media production and distribution.

The speakers outlined how multimodal AI, combining text, audio, video, and other formats, can enrich storytelling and improve accuracy in both content creation and audience engagement. They highlighted the shift from passive to adaptive storytelling, stressing the need for collaboration among creators, distributors, and technology providers to deliver value across the media chain. The deployment of AI at the edge was identified as vital in reducing latency, supporting real-time processing, and heightening audience immersion, particularly in live contexts such as sport and news.

A large part of the discussion focused on the technical foundations and advantages of multimodal AI and edge computing. The panel underscored advances in AI capabilities, including greater accuracy in tasks such as real-time captioning and speaker diarisation through the use of multimodal inputs. They also emphasised the benefits of processing data locally at the edge, citing examples such as the Nvidia DGX Nano device and hybrid models that balance edge and cloud computing. This hybrid approach was presented as cost-efficient and secure, while maintaining high accuracy and minimising latency.
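The edge/cloud split the panel described can be pictured with a minimal sketch. The code below is illustrative only and not drawn from the session: the Task fields, the place_task routine, and the latency and compute thresholds are all hypothetical, standing in for the kind of placement policy that keeps real-time work such as live captioning on a local device while offloading heavier batch jobs to the cloud.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency_budget_ms: int   # how quickly the result is needed
    compute_cost: float      # rough relative model size / GPU demand

# Hypothetical thresholds for illustration only; a real deployment
# would profile actual device and network performance.
EDGE_LATENCY_CEILING_MS = 200   # tasks needing faster turnaround stay on the edge
EDGE_COMPUTE_CEILING = 1.0      # tasks heavier than this are offloaded to the cloud

def place_task(task: Task) -> str:
    """Decide where to run a task in a hybrid edge/cloud setup."""
    if task.latency_budget_ms <= EDGE_LATENCY_CEILING_MS:
        return "edge"    # real-time needs (live captions, diarisation) run locally
    if task.compute_cost > EDGE_COMPUTE_CEILING:
        return "cloud"   # heavy batch work (e.g. archive indexing) goes to the cloud
    return "edge"        # default to local processing to limit data egress

if __name__ == "__main__":
    workload = [
        Task("live_captioning", latency_budget_ms=150, compute_cost=0.4),
        Task("speaker_diarisation", latency_budget_ms=180, compute_cost=0.6),
        Task("archive_highlight_indexing", latency_budget_ms=60_000, compute_cost=3.0),
    ]
    for task in workload:
        print(f"{task.name}: run on {place_task(task)}")
```

In practice such a policy would also weigh bandwidth, data-residency, and cost constraints, which is where the panel located the security and cost-efficiency benefits of keeping processing local.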

Summary created by Voxo. Headline and standfirst by the 365 editorial team.
