Tech Papers 2025: This paper presents the Avatar Representation Format (ARF), whose goal is to offer an interoperable exchange format for the storage, carriage, and animation of 3D avatars.
Abstract
Immersive communication and the vision of the “Metaverse” rely on the ability for users to share experiences in 3D virtual and mixed-reality spaces. A crucial aspect of these experiences is the use of avatars – digital representations of users that can convey presence, identity, and expression. Avatars allow people to embody virtual characters or versions of themselves in shared environments, enabling natural interaction and social communication even when physically apart. As augmented reality (AR) and virtual reality (VR) technologies advance, the importance of a standardized avatar format grows. An interoperable avatar format would ensure that users can maintain a consistent digital persona across different applications, platforms, and devices, fostering a seamless 3D communication experience. The Moving Picture Experts Group (MPEG) – within ISO/IEC – has been developing standards for immersive media (MPEG-I), addressing video, audio, and 3D scene representations. Recognizing the need for a common avatar framework, MPEG initiated the Avatar Representation Format (ARF) as part 39 of the MPEG-I ISO/IEC 23090 suite. The goal of ARF is to offer an interoperable exchange format for the storage, carriage, and animation of 3D avatars. In other words, ARF seeks to enable a user’s 3D avatar to be stored, transmitted, and animated in a standard way, so that any conformant system can interpret and render that avatar. By providing a unified format for both the avatar’s visual model and its animations, ARF stands to greatly facilitate immersive communications – from multi-user VR meetings to shared AR experiences – where consistent avatar representation is essential.
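To make the idea of a single exchange format that carries both the avatar's visual model and its animations more concrete, the following Python sketch models a hypothetical avatar container. The class names, fields, and the pose_at helper are illustrative assumptions for this article only; they are not the actual ARF (ISO/IEC 23090-39) data model, which is defined in the standard itself.

```python
# Hypothetical sketch of an avatar exchange container -- NOT the ARF schema.
# It only illustrates the concept of bundling a visual model (mesh + skeleton)
# with animation data so that any conformant reader can parse and animate
# the same asset in a consistent way.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class Joint:
    name: str
    parent: int  # index of the parent joint, -1 for the root
    rest_rotation: Tuple[float, float, float, float] = (0.0, 0.0, 0.0, 1.0)


@dataclass
class Mesh:
    vertices: List[Tuple[float, float, float]]
    triangles: List[Tuple[int, int, int]]
    joint_weights: List[Dict[int, float]] = field(default_factory=list)


@dataclass
class AnimationTrack:
    joint: int                                           # joint index this track drives
    times: List[float]                                    # keyframe times in seconds
    rotations: List[Tuple[float, float, float, float]]   # one quaternion per keyframe


@dataclass
class AvatarContainer:
    """Single interchange unit: visual model, skeleton, and named animation clips."""
    mesh: Mesh
    skeleton: List[Joint]
    animations: Dict[str, List[AnimationTrack]]

    def pose_at(self, clip: str, t: float) -> Dict[int, Tuple[float, float, float, float]]:
        """Return a per-joint rotation for clip `clip` at time t (nearest-previous keyframe)."""
        pose: Dict[int, Tuple[float, float, float, float]] = {}
        for track in self.animations.get(clip, []):
            idx = 0
            for i, key_time in enumerate(track.times):
                if key_time <= t:
                    idx = i
            pose[track.joint] = track.rotations[idx]
        return pose


if __name__ == "__main__":
    avatar = AvatarContainer(
        mesh=Mesh(vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)], triangles=[(0, 1, 2)]),
        skeleton=[Joint("root", -1), Joint("head", 0)],
        animations={"nod": [AnimationTrack(joint=1,
                                           times=[0.0, 0.5],
                                           rotations=[(0, 0, 0, 1), (0.1, 0, 0, 0.995)])]},
    )
    print(avatar.pose_at("nod", 0.6))  # -> {1: (0.1, 0, 0, 0.995)}
```

Because the visual model and its animation tracks travel together in one container, any receiver that can parse the container can render and animate the avatar without out-of-band agreements, which is the interoperability property the abstract describes.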
