
I have to write a PPM image compressor using the LZW algorithm. I understand the algorithm and the code, and I have already implemented a version for strings (in Java, for testing).

The big problem is the compression step itself. For example, if I have:

Input: ABCDABCDABCDABCD

Output: 65 66 67 68 256 258 260 259 257 68

My input has 16 characters, but if I simply save the output as a text file there is no compression, because the text above is 34 characters. So I thought about writing a binary file and storing each output code in one byte, but then there is the problem of codes >= 256. I also considered inserting escape fields whenever a value is greater than 255, something like this:

(in this case I would remove the codes 255 and 0 from my dictionary)

65 66 67 68 256 258 would become:

 65 66 67 68 255 1 0 255 3 0

Every field equal to 255 would then indicate that the following bytes should be summed to it, and the byte 0 would mark the end of the sum (so 255 1 0 stands for 256 and 255 3 0 stands for 258).
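Just to make the idea concrete, a rough Java sketch of that escape scheme could look like the following (the class and method names are made up for illustration; this is not my actual encoder):

import java.io.IOException;
import java.io.OutputStream;

class EscapeWriter {
    // Codes below 255 go out as a single byte; larger codes become one or more
    // 255 markers, an optional remainder byte, and a 0 terminator.
    static void writeEscaped(int code, OutputStream out) throws IOException {
        if (code < 255) {
            out.write(code);           // plain byte
            return;
        }
        int remaining = code;
        while (remaining >= 255) {     // markers that the reader sums up
            out.write(255);
            remaining -= 255;
        }
        if (remaining > 0) {
            out.write(remaining);      // remainder of the sum, if any
        }
        out.write(0);                  // 0 ends the sum
    }
}

With this, 256 would be written as 255 1 0 and 258 as 255 3 0, exactly as above.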

The problem is that even with this scheme my output ends up larger than the input, so the compression makes no sense. Is there a way around this?


1 Answer


LZW has a predictable size for the next code. So your encoder should keep track of the number of bits needed to store the current code, and your decoder should compute the bit size of the next code and read exactly that many bits.
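As a minimal sketch, assuming codes are packed least-significant-bit first into the output stream (the BitOutput class below is just an illustration, not a standard API), the encoder side could look like this:

import java.io.Closeable;
import java.io.IOException;
import java.io.OutputStream;

// Packs LZW codes into bytes, LSB first; the caller passes the current code width.
class BitOutput implements Closeable {
    private final OutputStream out;
    private int buffer = 0;
    private int bitCount = 0;

    BitOutput(OutputStream out) { this.out = out; }

    // Append the low 'width' bits of 'code' to the stream.
    void writeCode(int code, int width) throws IOException {
        buffer |= code << bitCount;
        bitCount += width;
        while (bitCount >= 8) {        // emit every completed byte
            out.write(buffer & 0xFF);
            buffer >>>= 8;
            bitCount -= 8;
        }
    }

    @Override
    public void close() throws IOException {
        if (bitCount > 0) {
            out.write(buffer & 0xFF);  // flush the last partial byte
        }
        out.close();
    }
}

The encoder would typically start at 9 bits (the 256 single-byte entries plus the codes it adds) and widen by one bit whenever the next code no longer fits in the current width, so the width itself never has to be stored in the file.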

So your decoder would look like:

while (read next block of "bits size" bits) {
    decode the character block
    update the dictionary
    bits size = bit size of the dictionary's next code
}
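A matching reader sketch (again with made-up names, mirroring the writer above) lets the decoder ask for exactly as many bits as the next code needs:

import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;

// Reads LSB-first codes of a caller-supplied width.
class BitInput implements Closeable {
    private final InputStream in;
    private int buffer = 0;
    private int bitCount = 0;

    BitInput(InputStream in) { this.in = in; }

    // Returns the next 'width'-bit code, or -1 when no complete code is left.
    int readCode(int width) throws IOException {
        while (bitCount < width) {
            int b = in.read();
            if (b < 0) {
                return -1;
            }
            buffer |= b << bitCount;
            bitCount += 8;
        }
        int code = buffer & ((1 << width) - 1);
        buffer >>>= width;
        bitCount -= width;
        return code;
    }

    @Override
    public void close() throws IOException {
        in.close();
    }
}

The decoder starts with the same 9-bit width and grows it by a rule that mirrors the encoder's (one extra bit as the dictionary fills each power of two; the exact off-by-one has to match whatever the encoder does), so both sides stay in sync without the file ever storing a width.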

No compression algorithm works in the general case. LZW works well on highly repetitive data; for non-repetitive data, the compressed size will always be larger than the original size.

answered 2014-05-02T02:06:28.537