
A data-driven approach to JND detection for perceptual video encoding

Tech Papers 2025: This paper introduces IMAX-JNDNet, a deep learning framework that predicts segment-level JND decisions by analyzing compressed video triplets.

ABSTRACT

Just Noticeable Difference (JND) defines the threshold at which video compression artifacts become perceptible to human vision. While traditional metrics such as PSNR and SSIM offer general quality predictions, they often fail to capture the fine-grained perceptual differences essential for streaming applications. To address this, we introduce IMAX-JNDNet, a deep learning framework that predicts segment-level JND decisions by analyzing compressed video triplets. The model processes reference, anchor, and test frames through a U-Net-like architecture and uses a learnable spatial pooling mechanism to generate frame-level perceptibility maps, which are then aggregated temporally. To support training and evaluation, we construct IMAX-JND-Data, a modern JND dataset comprising 205 high-quality video clips, encoded under realistic streaming conditions and labeled via expert-based subjective studies. Experimental results show that our model achieves an average bitrate saving of 19.56%, outperforming state-of-the-art methods by 15.26%, while preserving perceptual indistinguishability.
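To make the described pipeline concrete, the sketch below shows one plausible way a triplet-input, U-Net-like predictor with learnable spatial pooling and temporal aggregation could be wired up. It is a minimal illustration assuming PyTorch; the class names, channel widths, softmax-attention pooling, and mean-over-frames aggregation are all illustrative assumptions and not the architecture or training details of IMAX-JNDNet itself.

```python
# Minimal sketch of a triplet-input, U-Net-like JND predictor with learnable
# spatial pooling and temporal aggregation. All module names, channel widths,
# and the pooling/aggregation rules are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class JNDNetSketch(nn.Module):
    """Predicts a per-frame perceptibility map from a (reference, anchor, test)
    frame triplet, then pools it spatially with learnable attention weights."""

    def __init__(self, base_ch=32):
        super().__init__()
        # The three RGB frames are stacked along the channel axis (3 x 3 = 9).
        self.enc1 = conv_block(9, base_ch)
        self.enc2 = conv_block(base_ch, base_ch * 2)
        self.bottleneck = conv_block(base_ch * 2, base_ch * 4)
        self.up2 = nn.ConvTranspose2d(base_ch * 4, base_ch * 2, 2, stride=2)
        self.dec2 = conv_block(base_ch * 4, base_ch * 2)
        self.up1 = nn.ConvTranspose2d(base_ch * 2, base_ch, 2, stride=2)
        self.dec1 = conv_block(base_ch * 2, base_ch)
        # Two 1x1 heads: one for the perceptibility map, one for pooling weights.
        self.map_head = nn.Conv2d(base_ch, 1, 1)
        self.weight_head = nn.Conv2d(base_ch, 1, 1)

    def forward(self, ref, anchor, test):
        # ref / anchor / test: (B, 3, H, W) frames from the compressed triplet.
        x = torch.cat([ref, anchor, test], dim=1)
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        b = self.bottleneck(F.max_pool2d(e2, 2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        percept_map = self.map_head(d1)                        # (B, 1, H, W)
        # Learnable spatial pooling: softmax attention over pixel locations.
        w = torch.softmax(self.weight_head(d1).flatten(2), dim=-1)
        frame_score = (percept_map.flatten(2) * w).sum(dim=-1)  # (B, 1)
        return frame_score, percept_map


def segment_decision(model, triplets, threshold=0.5):
    """Temporal aggregation sketch: average the frame scores over a segment
    and compare against a decision threshold (the rule is an assumption)."""
    scores = [model(ref, anchor, test)[0] for ref, anchor, test in triplets]
    return (torch.stack(scores).mean() > threshold).item()
```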
