Does anyone know how to calculate the error of quantizing from 16-bit to 8-bit?
I have looked at the Wikipedia article about Quantization, but it doesn’t explain this.
Can anyone explain how it is done?
Lots of love,
Louise
Update: My function looks like this:

    unsigned char quantize(double d, double max) {
        return (unsigned char)((d / max) * 255.0);
    }
It is there in the Wikipedia article, expressed as a signal-to-noise ratio. But I guess the real question is: in what units do you want the result? Expressed in decibels, the dynamic range of an 8-bit quantizer is 20 log10(2^8) ≈ 48 dB.
You probably need to read this: http://en.wikipedia.org/wiki/Decibel