
Add the quantisation and arithmetic encoder, decoder #370

Open
neogyk opened this issue Mar 21, 2024 · 0 comments
Contributor

neogyk commented Mar 21, 2024

Many modern neural compression architectures rely on an entropy encoder-decoder such as arithmetic coding (or, for example, ANS), which can achieve a higher compression rate. The encoder maps a stream of data into a bit stream and requires the probability distribution of its input.

These functions appear as a middle layer of the autoencoder (AE) model. Usually the input to the arithmetic encoder-decoder is preprocessed by a quantization function that reduces the precision of the data or maps it to integers.

The optimization criterion consists of two parts: the rate of the compressed stream and the distortion of the reconstructed data.
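As a framework-agnostic sketch of this criterion (the weight `lam` and the function name are my own illustration, not from this issue), the objective can be written as L = R + λ·D, where R is estimated from the probabilities the entropy coder assigns to the quantized symbols:

```python
import math

def rate_distortion_loss(symbols, probs, original, reconstructed, lam=0.01):
    """Toy rate-distortion objective: L = R + lam * D.

    R: estimated bits needed to entropy-code `symbols` under the
       probability model `probs` (cross-entropy, in bits).
    D: mean squared error between original and reconstructed data.
    """
    rate = -sum(math.log2(probs[s]) for s in symbols)  # bits for the stream
    distortion = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    return rate + lam * distortion

# Uniform model over 4 symbols: each symbol costs exactly 2 bits.
probs = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
loss = rate_distortion_loss([0, 1, 2, 3], probs,
                            [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 2.0, 3.0])
# rate = 8 bits, distortion = 0, so loss == 8.0
```

In a training loop the hard rate term would be replaced by a differentiable estimate from the learned probability model, but the two-part structure of the loss is the same.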

I propose to add two files, quantization.py and coder.py, containing torch.nn.Module implementations of the corresponding functions, which can then be used in baler/modules/models.py.
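A minimal sketch of the quantization step described above, in plain Python for illustration (the proposed quantization.py would wrap this in a torch.nn.Module; the function names and the step size are my own assumptions):

```python
def quantize(x, step=0.5):
    """Map a float to an integer symbol by uniform quantization."""
    return round(x / step)

def dequantize(q, step=0.5):
    """Approximate inverse: map the integer symbol back to a float."""
    return q * step

values = [0.13, -0.74, 1.02]
symbols = [quantize(v) for v in values]      # integer stream for the entropy coder
restored = [dequantize(q) for q in symbols]  # lossy reconstruction
# symbols  == [0, -1, 2]
# restored == [0.0, -0.5, 1.0]
```

The integer symbols are what the arithmetic encoder would consume; the reconstruction error they introduce is exactly the distortion term of the rate-distortion objective. In a PyTorch version the non-differentiable rounding is usually handled with a straight-through estimator or additive uniform noise during training.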

Examples of ANS encoder-decoder implementations [1, 2] and usage [3]:

  1. The Constriction library
  2. Torchac
  3. neural-data-compression