
I have a 16-bit sample between -32768 and 32767. To save space I want to convert it to an 8-bit sample, so I divide the sample by 256 and add 128.

-32768 / 256 = -128, then -128 + 128 = 0
32767 / 256 ≈ 127.99, then 127.99 + 128 ≈ 255.99

Now, the 0 fits perfectly in a byte, but the 255.99 has to be rounded down to 255, causing me to lose precision, because when converting back I'll get 32512 instead of 32767.

How can I do this without losing the original min/max values? I know I'm making a very obvious thinking error, but I can't figure out where the mistake lies.

And yes, of course I'm fully aware that I lose precision by dividing and won't be able to deduce the original values from the 8-bit samples, but I just wonder why I don't get back the original maximum.
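For reference, a minimal C sketch of the round trip described above (the sample values and variable names are just for illustration); integer truncation plays the role of the rounding down, and the maximum 32767 indeed comes back as 32512:

#include <stdio.h>

int main(void)
{
    short samples[] = { -32768, 0, 32767 };

    for( int i = 0; i < 3; i++ ) {
        /* Down-convert: divide by 256, then add 128.   */
        int sample8  = samples[i] / 256 + 128;
        /* Convert back: subtract the offset, scale up. */
        int restored = (sample8 - 128) * 256;
        printf( "%6d -> %3d -> %6d\n", samples[i], sample8, restored );
    }
    return 0;
}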


3 Answers


The answers for down-sampling have already been provided.

This answer relates to up-sampling using the full range. Here is a C99 snippet demonstrating how you can spread the error across the full range of your values:

#include <stdio.h>

int main(void)
{
    for( int i = 0; i < 256; i++ ) {
        /* Map i in [0,255] to i*257 in [0,65535]: shift left by 8, then add i back. */
        unsigned short scaledVal = ((unsigned short)i << 8) + (unsigned short)i;
        printf( "%8d%8hu\n", i, scaledVal );
    }
    return 0;
}

It's quite simple: shift the value left by 8 and then add the original value back. That means every increase of 1 in the [0,255] range corresponds to an increase of 257 in the [0,65535] range.

I would like to point out that this might give worse results than you began with. For example, if you downsampled 65280 (0xff00) you would get 255, but then upsampling that would give 65535 (0xffff), which is a total error of 255. You will have similarly large errors across most of the higher end of your data range.

You might do better to abandon the notion of getting back to the full [0,65535] range and instead reconstruct to the middle of each step: shift left and add 127. That way the error is uniform instead of skewed. Because you don't actually know what the original value was, the best you can do is estimate it with a value right in the centre.

To summarize, I think this is more mathematically correct:

unsigned short scaledVal = ((unsigned short)i << 8) + 127;
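
To make the trade-off concrete, here is a quick sketch of my own (assuming unsigned 16-bit input) that measures the worst-case round-trip error of both reconstructions over every 16-bit value:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int worstFullRange = 0;   /* (v >> 8) * 257, i.e. shift left and add itself */
    int worstMidpoint  = 0;   /* ((v >> 8) << 8) + 127                          */

    for( int v = 0; v <= 65535; v++ ) {
        int d    = v >> 8;                      /* 8-bit downsample    */
        int errA = abs( d * 257 - v );          /* full-range upsample */
        int errB = abs( (d << 8) + 127 - v );   /* midpoint upsample   */
        if( errA > worstFullRange ) worstFullRange = errA;
        if( errB > worstMidpoint  ) worstMidpoint  = errB;
    }
    printf( "worst error, shift and add itself: %d\n", worstFullRange ); /* 255 */
    printf( "worst error, shift and add 127:    %d\n", worstMidpoint );  /* 128 */
    return 0;
}

The midpoint version halves the worst case (128 instead of 255), at the cost of never reproducing 0 or 65535 exactly.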
answered 2013-06-07T04:35:06.913

You don't get the original maximum because you can't represent the number 256 as an 8-bit unsigned integer.

answered 2013-06-07T01:13:44.847

If you're trying to compress your 16-bit integer value into the 8-bit integer range, you keep the most significant 8 bits and throw away the least significant 8 bits. Normally this is accomplished by shifting: the >> operator shifts bits from most toward least significant, so shifting by 1 eight times, or simply >> 8, does the job. You can also mask out the high byte and divide off the zeros, doing your rounding before the division, with something like eightBitInt = (sixteenBitInt & 0xFF00) / 256; (0xFF00 is 65280 in decimal).

Every bit you shift off of a value halves it, like a division by 2 that rounds down.

All of the above is complicated somewhat by the fact that you're dealing with a signed integer.
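
As one sketch of how that signedness could be handled (my own illustration, not part of this answer): bias the sample into the unsigned range first, then shift; under floor division this is the same mapping as the divide-by-256-and-add-128 from the question.

#include <stdio.h>

/* Hypothetical helper: map a signed 16-bit sample to an unsigned 8-bit one
   by biasing it into [0, 65535] and keeping the top 8 bits.              */
static unsigned char downsample( short sample16 )
{
    unsigned int biased = (unsigned int)( sample16 + 32768 ); /* [0, 65535] */
    return (unsigned char)( biased >> 8 );                    /* [0, 255]   */
}

int main(void)
{
    printf( "%d %d %d\n", downsample( -32768 ), downsample( 0 ), downsample( 32767 ) ); /* 0 128 255 */
    return 0;
}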

Finally, I'm not 100% certain I got everything right here because, really, I haven't tried doing this.

answered 2013-06-07T01:32:48.467