Abstract

This paper presents the first study of adapting compressed image latents to suit the needs of downstream vision tasks that adopt Multimodal Large Language Models (MLLMs). MLLMs have extended the success of large language models to modalities (e.g. images) beyond text, but their billion-scale size hinders deployment on resource-constrained end devices. While cloud-hosted MLLMs could be available, transmitting raw, uncompressed images captured by end devices to the cloud requires an efficient image compression system. To address this, we focus on emerging neural image compression and propose a novel framework with a lightweight transform-neck and a surrogate loss to adapt compressed image latents for MLLM-based vision tasks. Given the huge scale of MLLMs, our framework excludes the entire downstream MLLM, except part of its visual encoder, from training our system. This stands out from most existing coding-for-machines approaches, which involve downstream networks in training and thus could be impractical when those networks are MLLMs. The proposed framework is general in that it is applicable to various MLLMs, neural image codecs, and multiple application scenarios, where the neural image codec can be (1) pre-trained for human perception without updating, (2) fully updated for joint human and machine perception, or (3) fully updated for machine perception only. Extensive experiments on different neural image codecs and various MLLMs show that our method achieves strong rate-accuracy performance with much lower complexity.
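To make the training setup concrete, below is a minimal PyTorch-style sketch of how a lightweight transform-neck could adapt compressed latents to a frozen visual encoder under a feature-matching surrogate loss. The module name `TransformNeck`, the channel/embedding dimensions, and the MSE-based surrogate loss are illustrative assumptions rather than the paper's exact design; `codec_enc` and `vis_enc` stand in for a pre-trained neural codec encoder and the frozen visual encoder of the MLLM.

```python
# Minimal sketch, assuming a pre-trained codec encoder and a frozen CLIP-style
# visual encoder; only the transform-neck is trained, the MLLM stays out of the loop.
import torch
import torch.nn as nn

class TransformNeck(nn.Module):
    """Lightweight module mapping compressed image latents to visual-encoder-like tokens."""
    def __init__(self, latent_ch=192, embed_dim=1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(latent_ch, embed_dim, kernel_size=3, padding=1),
            nn.GELU(),
            nn.Conv2d(embed_dim, embed_dim, kernel_size=1),
        )

    def forward(self, latents):                    # latents: (B, C, h, w)
        feats = self.proj(latents)                 # (B, D, h, w)
        return feats.flatten(2).transpose(1, 2)    # (B, h*w, D) tokens for the MLLM

def surrogate_loss(neck_tokens, target_tokens):
    """Match transform-neck outputs to features the frozen visual encoder
    produces on the uncompressed image (the downstream MLLM is excluded)."""
    return nn.functional.mse_loss(neck_tokens, target_tokens)

# Hypothetical training step for scenario (1), codec and visual encoder frozen:
# with torch.no_grad():
#     latents = codec_enc(image)               # compressed-domain latents
#     target  = vis_enc_partial(image)         # intermediate visual-encoder features
# loss = surrogate_loss(neck(latents), target)
# loss.backward(); optimizer.step()
```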


Rate-Distortion Results

The proposed method significantly improves rate-accuracy performance for MLLM-based vision tasks by efficiently adapting compressed image latents. Compared to using images reconstructed by codecs optimized for human perception, it achieves bit-rate reductions of up to 60-80% at the same accuracy. Compared with the image post-processing baseline, the lightweight transform-neck achieves competitive performance while reducing decoding complexity by nearly 95% in terms of multiply-accumulate operations (kMAC/pixel). The method generalizes well across different MLLMs, neural image codecs, and tasks, demonstrating superior performance. By allowing the encoder to be optimized for machine perception (d2, d3), the approach achieves recognition accuracy closer to that of uncompressed images while reducing transmission cost.
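Bit-rate savings at equal accuracy are commonly summarized with a Bjøntegaard-delta-style (BD-rate) metric over rate-accuracy curves. The sketch below shows one standard way such a number can be computed; the cubic fit in the log-rate domain follows common practice and is not necessarily the paper's exact evaluation code.

```python
# Illustrative BD-rate-style computation over rate-accuracy points (assumed protocol).
import numpy as np

def bd_rate(rate_anchor, acc_anchor, rate_test, acc_test):
    """Average % bit-rate change of the test method vs. the anchor at equal accuracy.
    Negative values mean the test method needs fewer bits."""
    lr_a, lr_t = np.log(rate_anchor), np.log(rate_test)
    # Fit cubic polynomials: log-rate as a function of accuracy.
    p_a = np.polyfit(acc_anchor, lr_a, 3)
    p_t = np.polyfit(acc_test, lr_t, 3)
    lo = max(min(acc_anchor), min(acc_test))
    hi = min(max(acc_anchor), max(acc_test))
    # Integrate both fits over the overlapping accuracy range.
    int_a = np.polyval(np.polyint(p_a), hi) - np.polyval(np.polyint(p_a), lo)
    int_t = np.polyval(np.polyint(p_t), hi) - np.polyval(np.polyint(p_t), lo)
    avg_diff = (int_t - int_a) / (hi - lo)
    return (np.exp(avg_diff) - 1) * 100

# Example with made-up rate (bpp) / accuracy points:
# print(bd_rate([0.1, 0.2, 0.4, 0.8], [60, 65, 70, 72],
#               [0.05, 0.1, 0.2, 0.4], [61, 66, 71, 73]))
```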



Qualitative Comparison

The visualizations highlight the effectiveness of the proposed method in image captioning, visual question answering (VQA), referring expression comprehension (REC), and few-shot classification. Compared to the standard reconstruction and post-processing baselines, which exhibit various artifacts in their reconstructed images, the proposed method consistently generates more accurate and semantically meaningful outputs, even at low bitrates and without requiring image reconstruction.