NTSCC++: Improved Nonlinear Transform Source-Channel Coding

IEEE Journal of Selected Topics in Signal Processing

Beijing University of Posts and Telecommunications

Recent deep learning methods have led to increased interest in high-efficiency end-to-end transmission. These methods, which we call nonlinear transform source-channel coding (NTSCC), extract the semantic latent features of the source signal and learn an entropy model that guides variable-rate joint source-channel coding (JSCC) for transmitting the latent features over wireless channels. In this paper, we propose a comprehensive framework for improving NTSCC that achieves higher system coding gain, better model compatibility, and a more flexible adaptation strategy aligned with semantic guidance.
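As a minimal illustration of how a learned entropy model can guide variable-rate JSCC, the NumPy sketch below maps per-patch information estimates to channel-bandwidth allocations. The function name, the scaling constant `eta`, and the bounds `k_min` / `k_max` are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np

def allocate_bandwidth(neg_log_prob, eta=0.2, k_min=2, k_max=96):
    """Map per-patch entropy estimates (-log p(y_i), in bits) to per-patch
    channel symbol counts k_i: patches the entropy model deems more
    informative receive more channel uses. All constants are illustrative."""
    k = np.clip(np.round(eta * neg_log_prob), k_min, k_max)
    return k.astype(int)

# Example: four latent patches with different estimated information content.
neg_log_prob = np.array([12.0, 55.0, 230.0, 480.0])   # bits per patch
print(allocate_bandwidth(neg_log_prob))               # -> [ 2 11 46 96 ]
```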

Overview


The overall NTSCC++ system architecture for semantic communications, comprising the semantic analysis transform g_a, the contextual JSCC encoder f_e, the contextual JSCC decoder f_d, the semantic synthesis transform g_s, and the checkerboard context entropy model.
Compared to the original NTSCC system, the three improvements are marked with circled numbers:
① denotes the contextual modeling and contextual JSCC, toward higher system coding gain (NTSCC+);
② denotes the rate-compatible and channel-adaptive strategy, enabling a compatible NTSCC+;
③ denotes the online latent-feature and JSCC codec editing methods, enabling more flexible adaptability (NTSCC++).
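For orientation, the following is a minimal structural sketch of the pipeline in the figure (x → g_a → f_e → channel → f_d → g_s), written in PyTorch. The plain convolutional stacks stand in for the actual Transformer-based transforms, and the checkerboard context entropy model and variable-rate allocation are omitted; module names and hyperparameters are placeholders, not the released implementation.

```python
import torch
import torch.nn as nn

class AWGNChannel(nn.Module):
    """Additive white Gaussian noise channel at a given SNR (dB)."""
    def __init__(self, snr_db=10.0):
        super().__init__()
        self.snr_db = snr_db

    def forward(self, s):
        signal_power = s.pow(2).mean()
        noise_power = signal_power / (10 ** (self.snr_db / 10))
        return s + torch.randn_like(s) * noise_power.sqrt()

class NTSCCSketch(nn.Module):
    """Structural sketch: x -> g_a -> f_e -> channel -> f_d -> g_s.
    Simple conv stacks replace the Transformer-based transforms; the
    entropy model and rate allocation are left out for brevity."""
    def __init__(self, C=192, M=48):
        super().__init__()
        self.g_a = nn.Sequential(                       # semantic analysis transform
            nn.Conv2d(3, C, 5, stride=4, padding=2), nn.GELU(),
            nn.Conv2d(C, C, 5, stride=4, padding=2))
        self.f_e = nn.Conv2d(C, M, 1)                   # contextual JSCC encoder (placeholder)
        self.channel = AWGNChannel(snr_db=10.0)
        self.f_d = nn.Conv2d(M, C, 1)                   # contextual JSCC decoder (placeholder)
        self.g_s = nn.Sequential(                       # semantic synthesis transform
            nn.ConvTranspose2d(C, C, 5, stride=4, padding=2, output_padding=3), nn.GELU(),
            nn.ConvTranspose2d(C, 3, 5, stride=4, padding=2, output_padding=3))

    def forward(self, x):
        y = self.g_a(x)          # semantic latent features
        s = self.f_e(y)          # channel-input symbols
        s_hat = self.channel(s)  # noisy symbols after the AWGN channel
        y_hat = self.f_d(s_hat)  # recovered latent features
        return self.g_s(y_hat)   # reconstructed image

x = torch.randn(1, 3, 256, 256)
x_hat = NTSCCSketch()(x)
print(x_hat.shape)               # torch.Size([1, 3, 256, 256])
```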

Partial rate-distortion results

We compare the rate-distortion performance of different coded transmission schemes over the AWGN channel at SNR = 10 dB. Our contextual NTSCC model (NTSCC+) and its online-adapted version (NTSCC++) show strong performance compared to separately designed coded transmission schemes. To the best of our knowledge, this is the first end-to-end transmission system to outperform the state-of-the-art VTM + 5G LDPC scheme on the classical PSNR metric.
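For reference, the classical PSNR metric referred to above is the standard peak signal-to-noise ratio; a minimal routine is sketched below, assuming images scaled to [0, 1]. The example inputs are synthetic and only illustrate the computation.

```python
import numpy as np

def psnr(x, x_hat, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a reference image x and its
    reconstruction x_hat, both arrays scaled to [0, max_val]."""
    mse = np.mean((x - x_hat) ** 2)
    return 10 * np.log10(max_val ** 2 / mse)

x = np.random.rand(256, 256, 3)
x_hat = np.clip(x + np.random.normal(scale=0.02, size=x.shape), 0, 1)
print(f"PSNR: {psnr(x, x_hat):.2f} dB")   # roughly 34 dB for noise std 0.02
```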

[Rate-distortion curves on the Kodak dataset]
[Rate-distortion curves on the CLIC21 test set]

Related Links

Bibtex

Acknowledgements

This website is inspired by the template of pixelNeRF. Please send any questions or comments to sixian@bupt.edu.cn.