Commit

actual stable release
Justin-Tan committed Sep 13, 2020
1 parent 1b4e184 commit 50070fe
Showing 5 changed files with 13 additions and 13 deletions.
11 changes: 6 additions & 5 deletions README.md
@@ -4,13 +4,13 @@ Pytorch implementation of the paper ["High-Fidelity Generative Image Compression"

## About

This repository defines a model for learnable image compression based on the paper ["High-Fidelity Generative Image Compression" (HIFIC) by Mentzer et al.](https://hific.github.io/). The model is capable of compressing images of arbitrary size and resolution while maintaining perceptually similar reconstructions that tend to be more visually pleasing than standard image codecs operating at higher bitrates.
This repository defines a model for learnable image compression based on the paper ["High-Fidelity Generative Image Compression" (HIFIC) by Mentzer et al.](https://hific.github.io/). The model is capable of compressing images of arbitrary spatial dimension and resolution by up to two orders of magnitude, while maintaining perceptually similar reconstructions that tend to be more visually pleasing than standard image codecs operating at higher bitrates.

This repository also includes a partial port of the [Tensorflow Compression library](https://github.com/tensorflow/compression) which provides general tools for neural image compression in Pytorch.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/Justin-Tan/high-fidelity-generative-compression/blob/master/assets/HiFIC_torch_colab_demo.ipynb)

You can play with a [demonstration of the model in Colab](https://colab.research.google.com/github/Justin-Tan/high-fidelity-generative-compression/blob/master/assets/HiFIC_torch_colab_demo.ipynb), where you can compress your own images.
You can play with a [demonstration of the model in Colab](https://colab.research.google.com/github/Justin-Tan/high-fidelity-generative-compression/blob/master/assets/HiFIC_torch_colab_demo.ipynb), where you can upload and compress your own images.

## Example

@@ -77,7 +77,7 @@ python3 compress.py -i path/to/image/dir -ckpt path/to/trained/model --reconstru

### Pretrained Models

* Pretrained model weights using the OpenImages dataset can be found below (~2 GB). The examples at the end of this readme were produced using the `HIFIC-med` model. For usage instructions see the [user's guide](assets/USAGE_GUIDE.md). The same models are also hosted in the following Zenodo repository: https://zenodo.org/record/4026003 .
* Pretrained model weights using the OpenImages dataset can be found below (~2 GB). The examples at the end of this readme were produced using the `HIFIC-med` model. For usage instructions see the [user's guide](assets/USAGE_GUIDE.md). The same models are also hosted in the following Zenodo repository: https://zenodo.org/record/4026003.

| Target bitrate (bpp) | Weights | Training Instructions |
| ----------- | -------------------------------- | ---------------------- |
@@ -141,13 +141,14 @@ The last two show interesting failure modes: small figures in the distance are a
* Justin Tan

### Acknowledgements

* The compression routines under `src/compression/` are derived from the [Tensorflow Compression library](https://github.com/tensorflow/compression).
* The rANS encoder implementation under `src/compression/` is based on the [Craystack repository](https://github.com/j-towns/craystack).
* The vectorized rANS encoder implementation under `src/compression/` is based on the [Craystack repository](https://github.com/j-towns/craystack).
* The code under `src/loss/perceptual_similarity/` implementing the perceptual distortion loss is based on the [Perceptual Similarity repository](https://github.com/richzhang/PerceptualSimilarity).

### Contributing

All content in this repository is licensed under the Apache-2.0 license. Feel free to submit any corrections or suggestions as issues.
All content in this repository is licensed under the Apache-2.0 license. Please open an issue if you encounter unexpected behaviour, or have corrections/suggestions to contribute.

## Citation

Expand Down
4 changes: 3 additions & 1 deletion assets/HiFIC_torch_colab_demo.ipynb
@@ -519,7 +519,9 @@
"# HiFIC Demo\n",
"Compress arbitrary images in Colab using a pretrained neural compression model. This is a Pytorch port of the [High-Fidelity Image Compression](https://hific.github.io/) project - see the [Github repo](https://github.com/Justin-Tan/high-fidelity-generative-compression) for the source.\n",
"\n",
"Execute all cells in sequence to see the results of compression on a default image, or upload your own images to be compressed.\n"
"Execute all cells in sequence to see the results of compression on a default image, or upload your own images to be compressed by following the steps in the notebook.\n",
"\n",
"Some sample reconstructions from the compressed format can be found [here](https://github.com/Justin-Tan/high-fidelity-generative-compression/blob/master/assets/EXAMPLES.md). For detailed usage instructions please see [the user's guide](https://github.com/Justin-Tan/high-fidelity-generative-compression/blob/master/assets/USAGE_GUIDE.md).\n"
]
},
{
6 changes: 3 additions & 3 deletions assets/USAGE_GUIDE.md
@@ -10,6 +10,8 @@ This repository defines a model for learnable image compression capable of compr

The model is then trained end-to-end by optimization of a modified rate-distortion Lagrangian. Loosely, the model can be thought of as 'amortizing' the storage requirements for a generic input image by training a learnable compression/decompression scheme. The method is further described in the original paper [[0](https://arxiv.org/abs/2006.09965)]. The model is capable of yielding perceptually similar reconstructions to the input that tend to be more visually pleasing than standard image codecs which operate at comparable or higher bitrates.
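
Schematically, the Lagrangian balances a distortion term against an estimated rate. A minimal PyTorch sketch of this shape, assuming `total_bits` comes from the entropy model; the repository's full objective also includes perceptual and adversarial terms:

```python
import torch.nn.functional as F

def rate_distortion_loss(x, x_hat, total_bits, lambda_rate=0.1):
    """Minimal rate-distortion Lagrangian: distortion + lambda * rate.

    `total_bits` is assumed to be the entropy model's estimate of
    -log2 p(latents); `lambda_rate` sets the rate-distortion trade-off.
    """
    distortion = F.mse_loss(x_hat, x)               # distortion term d(x, x_hat)
    n_pixels = x.size(0) * x.size(-2) * x.size(-1)  # batch * height * width
    bpp = total_bits / n_pixels                     # rate term in bits per pixel
    return distortion + lambda_rate * bpp
```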

The model weights occupy 1.5-2 GB on disk, making transmission of the model itself impractical. The idea is that the same model is instantiated and made available to a sender and receiver. The sender encodes messages into the compressed format, which is transmitted via some channel to the receiver, who then decodes the compressed representation into a lossy reconstruction of the original data.
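
A toy illustration of this flow (the codec below is a lossless stand-in, not this repo's API; the real model's encode/decode paths live under `src/` and are driven by `compress.py`, and its reconstructions are lossy):

```python
import zlib

class ToyCodec:
    """Stand-in for the shared pretrained model held by both parties."""
    def compress(self, data: bytes) -> bytes:
        return zlib.compress(data)
    def decompress(self, bitstream: bytes) -> bytes:
        return zlib.decompress(bitstream)

sender_model = ToyCodec()      # instantiated on the sender's machine
receiver_model = ToyCodec()    # identical instantiation on the receiver's machine

bitstream = sender_model.compress(b"pixel data ...")    # encode to compressed format
# ... only the compact bitstream crosses the channel ...
reconstruction = receiver_model.decompress(bitstream)   # decode on the other end
```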

This repository also includes a partial port of the [Tensorflow Compression library](https://github.com/tensorflow/compression), which provides general tools for neural image compression.

## Training
@@ -99,7 +101,7 @@ python3 compress.py -i path/to/image/dir -ckpt path/to/trained/model --save

* Network architectures can be modified by changing the respective files under `src/network`.
* The entropy model for both latents and hyperlatents can be changed by modifying `src/network/hyperprior`. For reference, there is an implementation of a discrete-logistic mixture model that can be used in place of the default mean-scale Gaussian latent model (a sketch of the latter follows this list).
* The exact compression algorithm used can be replaced with any entropy coder that makes use of indexed probability tables.
* The exact compression algorithm used can be replaced with any entropy coder that makes use of indexed probability tables. The default is a vectorized rANS coder which encodes overflow values using a variable-length code, but this overflow handling is costly (see the minimal rANS sketch after this list).
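
For reference, the default mean-scale Gaussian entropy model evaluates each quantized latent's probability as the Gaussian mass on the unit-width bin around it. A minimal sketch under that assumption (the scale floor of 0.11 is a common choice in the literature, not necessarily this repo's; see `src/network/hyperprior` for the actual implementation):

```python
import torch

def gaussian_bin_likelihood(y_hat, mean, scale, scale_floor=0.11):
    """P(y_hat) = CDF(y_hat + 0.5) - CDF(y_hat - 0.5) under N(mean, scale)."""
    dist = torch.distributions.Normal(mean, scale.clamp(min=scale_floor))
    return dist.cdf(y_hat + 0.5) - dist.cdf(y_hat - 0.5)

def rate_in_bits(y_hat, mean, scale):
    """Estimated rate: -log2 likelihood, summed over all latent elements."""
    p = gaussian_bin_likelihood(y_hat, mean, scale).clamp(min=1e-9)
    return -torch.log2(p).sum()
```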
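
And a minimal scalar rANS step over an indexed probability table (a simplified sketch; the repo's coder is vectorized and adds the overflow handling noted above):

```python
PRECISION = 16                 # frequencies in the table sum to 2**PRECISION
MASK = (1 << PRECISION) - 1
STATE_LOW = 1 << 16            # invariant: STATE_LOW <= state < (STATE_LOW << 16)

def rans_push(state, stream, start, freq):
    """Encode one symbol, given its (start, freq) row of the probability table."""
    # Renormalise: flush 16-bit words so the mapping below cannot overflow 32 bits.
    while state >= (freq << (32 - PRECISION)):
        stream.append(state & 0xFFFF)
        state >>= 16
    # Map the state into this symbol's slice of [0, 2**PRECISION).
    return (state // freq << PRECISION) + state % freq + start

def rans_pop(state, stream, starts, freqs, slot_to_symbol):
    """Decode one symbol; exact inverse of rans_push."""
    slot = state & MASK
    s = slot_to_symbol[slot]   # indexed lookup: which symbol owns this slot
    state = freqs[s] * (state >> PRECISION) + slot - starts[s]
    # Renormalise: pull words back in until the invariant holds again.
    while state < STATE_LOW and stream:
        state = (state << 16) | stream.pop()
    return s, state
```

Because rANS behaves like a stack, symbols are decoded in the reverse of their encoding order.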

## Notes

@@ -117,7 +119,6 @@ Feel free to submit any questions/corrections/suggestions/bugs as issues. Pull r

To take a look under the hood, you can play with the [demonstration of the model in Colab](https://colab.research.google.com/github/Justin-Tan/high-fidelity-generative-compression/blob/master/assets/HiFIC_torch_colab_demo.ipynb), and compress your own images.


### References

The following additional papers were useful to understand implementation details.
@@ -132,5 +133,4 @@ The following additional papers were useful to understand implementation details

* Investigate bit overhead in vectorized rANS implementation.
* Include `torchac` support for entropy coding.
* Implement universal code for overflow values.
* Rewrite rANS implementation for speed.
1 change: 1 addition & 0 deletions requirements.txt
@@ -1,4 +1,5 @@
absl-py==0.9.0
autograd==1.3
cachetools==4.1.1
certifi==2020.6.20
chardet==3.0.4
4 changes: 0 additions & 4 deletions src/compression/ans.py
@@ -3,10 +3,6 @@
Based on https://arxiv.org/abs/1402.3392
x: compressed message, represented by current state of the encoder/decoder.
x = (s, t)
s: int in range [2**(s_prec - t_prec), 2**s_prec)
t: Immutable stack containing ints in range [0, 2**t_prec)
Precisions satisfy t_prec < s_prec
precision: the natural numbers are divided into ranges of size 2^precision.
start & freq: start indicates the beginning of the range in [0, 2^precision-1]
