Investigating and exploring context-tree weighting
Explaining CTW is a little tough.
Project presentation
By the way, here's the code for CTW!
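Since the linked code isn't reproduced here, the following is a minimal sketch of the core CTW computation for binary sequences: a Krichevsky–Trofimov (KT) estimator at each context node, plus the recursive half-half mixing between a node's own estimate and its children's. The function names, the zero-padding of the unseen past, and the default depth are all illustrative choices, not taken from the project's actual code.

```python
from math import lgamma, log, exp

def kt_log(a, b):
    """Log of the KT block probability for `a` zeros and `b` ones."""
    return (lgamma(a + 0.5) + lgamma(b + 0.5) - 2 * lgamma(0.5)
            - lgamma(a + b + 1.0))

def log_add(x, y):
    """log(exp(x) + exp(y)) computed stably."""
    m = max(x, y)
    return m + log(exp(x - m) + exp(y - m))

def ctw_log_prob(bits, depth=3):
    """Log probability of `bits` under CTW with the given context depth.
    The unseen past before the sequence is padded with zeros."""
    counts = {}  # context tuple (most recent bit first) -> [zeros, ones]
    padded = [0] * depth + list(bits)
    for i, bit in enumerate(bits):
        ctx = tuple(reversed(padded[i:i + depth]))  # previous `depth` bits
        for d in range(depth + 1):                  # update every suffix node
            node = ctx[:d]
            counts.setdefault(node, [0, 0])[bit] += 1

    def weighted(node):
        a, b = counts.get(node, (0, 0))
        if a + b == 0:          # unvisited subtree contributes probability 1
            return 0.0
        lkt = kt_log(a, b)
        if len(node) == depth:  # leaf: pure KT estimate
            return lkt
        lsplit = weighted(node + (0,)) + weighted(node + (1,))
        return log(0.5) + log_add(lkt, lsplit)  # mix node vs. its children

    return weighted(())
```

The code length in bits is `-ctw_log_prob(bits) / log(2)`; a highly predictable sequence such as `0101...` gets a code length far below its raw length, which is the point of the method.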
Character | ASCII (dec) | Frequency | Huffman code | ASCII (binary) |
---|---|---|---|---|
'A' | 065 | 1 | 00 | 01000001 |
'g' | 103 | 1 | 01 | 01100111 |
'i' | 105 | 1 | 11 | 01101001 |
'o' | 111 | 1 | 10 | 01101111 |
And here's the code for the Huffman coding compressor and decompressor.
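The linked implementation isn't shown here, so below is a minimal sketch of a Huffman compressor and decompressor using the usual heap-based tree construction. Function names and the string-of-bits representation are illustrative; a real compressor would pack the bits into bytes.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix code (symbol -> bit string) from symbol frequencies."""
    freq = Counter(data)
    if len(freq) == 1:  # degenerate case: only one distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tie-breaker, tree); a tree is a symbol or a pair.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, counter, (t1, t2)))
        counter += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes

def compress(data, codes):
    return "".join(codes[ch] for ch in data)

def decompress(bits, codes):
    inverse = {v: k for k, v in codes.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inverse:  # prefix property: the first match is a full symbol
            out.append(inverse[cur])
            cur = ""
    return "".join(out)
```

For the four equally frequent symbols in the table above, this assigns every symbol a 2-bit code, so `"Agio"` compresses from 32 bits of ASCII to 8 bits (plus the cost of transmitting the code table).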
Algorithm | Size (bytes) |
---|---|
Original (enwik4) | 100326 |
gzip (DEFLATE: LZ77 + Huffman) | 26780 |
Huffman | 53967 |
7-Zip (LZMA and LZMA2) | 23806 |
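A comparison like the table above can be reproduced for any input with Python's standard-library compressors: `zlib` implements DEFLATE (what gzip uses) and `lzma` implements LZMA (the algorithm behind 7-Zip). This is a sketch on synthetic repetitive data, not a reproduction of the enwik4 numbers.

```python
import zlib
import lzma

def compare(data: bytes):
    """Report compressed sizes under DEFLATE (zlib/gzip) and LZMA (7-Zip)."""
    sizes = {
        "original": len(data),
        "zlib (DEFLATE)": len(zlib.compress(data, level=9)),
        "lzma (LZMA)": len(lzma.compress(data)),
    }
    for name, size in sizes.items():
        print(f"{name:>16}: {size} bytes")
    return sizes

# Highly repetitive example input; real corpora compress far less.
compare(b"the quick brown fox " * 500)
```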
Assume that there is an explanation of the autoencoder here.
Assume that there is a table of results here.
Here is the code for the autoencoder and the pretrained model.
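The actual model and pretrained weights aren't shown here, so the following is a minimal single-hidden-layer autoencoder in plain NumPy, trained with gradient descent on synthetic data. All shapes, hyperparameters, and the synthetic 2-D-subspace data are illustrative assumptions, not the project's real setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 8-dimensional inputs that actually lie on a 2-D subspace,
# so a 2-unit bottleneck can reconstruct them well.
basis = rng.normal(size=(2, 8))
X = rng.normal(size=(200, 2)) @ basis

def init(n_in=8, n_hidden=2):
    return {
        "W1": rng.normal(scale=0.1, size=(n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(scale=0.1, size=(n_hidden, n_in)),
        "b2": np.zeros(n_in),
    }

def forward(p, X):
    H = np.tanh(X @ p["W1"] + p["b1"])   # encoder: compress to the bottleneck
    Y = H @ p["W2"] + p["b2"]            # decoder: reconstruct the input
    return H, Y

def train(p, X, lr=0.1, steps=1000):
    n = len(X)
    for _ in range(steps):
        H, Y = forward(p, X)
        err = Y - X                        # d(MSE)/dY up to a constant factor
        gW2 = H.T @ err / n
        gb2 = err.mean(axis=0)
        dH = (err @ p["W2"].T) * (1 - H ** 2)  # backprop through tanh
        gW1 = X.T @ dH / n
        gb1 = dH.mean(axis=0)
        for k, g in zip(("W1", "b1", "W2", "b2"), (gW1, gb1, gW2, gb2)):
            p[k] -= lr * g
    return p

params = init()
loss_before = ((forward(params, X)[1] - X) ** 2).mean()
params = train(params, X)
loss_after = ((forward(params, X)[1] - X) ** 2).mean()
```

Compression here means keeping only the 2 bottleneck activations per 8-dimensional input; the reconstruction loss measures how much information that throws away.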
The Barf Thingy can be found here.
- An Autoencoder-based Learned Image Compressor: Description of Challenge Proposal by NCTU
- Text Encryption with Huffman Compression
- The context-tree weighting method: basic properties
- Context Tree Switching
- Hutter Prize
- improve the compression
- adapt the Huffman implementation for the project!
- audio compression (converting it to spectrograms might help)
- read the compression part
- decompression may not work; verify it
- make it visual with some Python lib