GNOME Bugzilla – Bug 92918
gdk_rgb_xpixel_from_rgb_internal() ignores low order bits
Last modified: 2004-12-22 21:47:04 UTC
When the function gdk_rgb_xpixel_from_rgb_internal() converts from RGB to TrueColor, it ignores the low order bits of the input. On my 5:6:5 visual this produces wrong colors in certain cases; for example, the color resulting from the RGB input (63568, 63568, 63568) is a pale purple, not gray, because the rounding error on the green channel is different from that on the red and blue channels. I'm attaching a patch.
Created attachment 10992 [details] [review] Patch
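To make the reported behaviour concrete, here is a small worked sketch of what a straight downshift does to that input on a 5:6:5 visual. This is illustrative arithmetic only, not the GdkRGB code:

  #include <stdio.h>

  int main (void)
  {
    unsigned int v = 63568;      /* 16-bit input for r, g and b */
    unsigned int r = v >> 11;    /* keep the top 5 bits -> 31   */
    unsigned int g = v >> 10;    /* keep the top 6 bits -> 62   */
    unsigned int b = v >> 11;    /* keep the top 5 bits -> 31   */

    /* Re-expand to the intensity the hardware will display. */
    printf ("r = %u/31 = %.4f\n", r, r / 31.0);  /* 1.0000 */
    printf ("g = %u/63 = %.4f\n", g, g / 63.0);  /* 0.9841 */
    printf ("b = %u/31 = %.4f\n", b, b / 31.0);  /* 1.0000 */
    /* Green falls one level short of red and blue, so the gray
       input is displayed with a slight purple (magenta) tint. */
    return 0;
  }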
Yes, the two algorithms give different results, but that doesn't mean that one of them is wrong. Half the pure grays _inevitably_ become non-gray when converting to 565. All you can do is change which grays these are.

A detailed discussion of converting color values between different ranges can be found at: http://people.redhat.com/otaylor/pixel-converting.html

The change you are proposing is (more or less) going from method #1 (equal division into intervals) to method #2 (values at center of intervals). A straight downshift (method #1) is the conventional way of reducing color depth, and if for no other reason, that's what I think we should continue to use here.
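For reference, a minimal sketch of the two methods as named above (method numbers refer to the page linked in the previous comment; this is not the GdkRGB implementation, and the rounding used for method #2 is just one reasonable choice):

  #include <stdio.h>

  /* Method #1: equal division into intervals -- a straight downshift. */
  static unsigned int
  convert_downshift (unsigned int v16, int bits)
  {
    return v16 >> (16 - bits);
  }

  /* Method #2: values at the center of intervals -- scale to the
     target range and round to the nearest representable level. */
  static unsigned int
  convert_round (unsigned int v16, int bits)
  {
    unsigned int max = (1u << bits) - 1;
    return (v16 * max + 32767) / 65535;
  }

  int main (void)
  {
    unsigned int v = 63568;
    printf ("downshift: r=%u g=%u b=%u\n",
            convert_downshift (v, 5), convert_downshift (v, 6),
            convert_downshift (v, 5));   /* 31 62 31 */
    printf ("rounded:   r=%u g=%u b=%u\n",
            convert_round (v, 5), convert_round (v, 6),
            convert_round (v, 5));       /* 30 61 30 */
    return 0;
  }

With method #2 the re-expanded intensities (30/31 and 61/63, both about 0.968) come out nearly equal, so this particular input is displayed as gray rather than pale purple.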
Created attachment 10998 [details] rounding diagram
(I may be missing something, but ...) The attached image shows the rounding error with the two methods. The blue curve shows the rounding error with method #2, a normal rounding to the nearest integer; the combined red+blue curve shows the rounding error for method #1 (divide the target interval into N+1 segments). Clearly, the rounding error is smaller with method #2.

Wrt. the grays becoming non-gray: yeah, it is actually obvious that half the pure grays will have to become non-gray if we are going to use all 6 bits. I should have realized that. Can't we detect this situation? I.e., when the function is converting a gray into a pixel with unequal numbers of bits per channel, pretend all channels have the same number of bits as the one with the smallest number, and then shift suitably. It looks pretty bad when a gray becomes purple or green. For non-gray input colors the errors are not as bad, because the change in hue is not as visible.
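The "shift suitably" step isn't spelled out above; the following is a minimal sketch of one possible reading, assuming bit replication is used to widen the 5-bit value into the 6-bit green field (this is not part of the attached patch):

  #include <stdio.h>

  int main (void)
  {
    unsigned int v = 63568;                 /* gray input, r == g == b      */
    unsigned int q5 = v >> 11;              /* quantize once, to 5 bits: 31 */
    unsigned int r = q5;                    /* 31                           */
    unsigned int g = (q5 << 1) | (q5 >> 4); /* widen 5 -> 6 bits: 63        */
    unsigned int b = q5;                    /* 31                           */

    printf ("r = %.4f  g = %.4f  b = %.4f\n",
            r / 31.0, g / 63.0, b / 31.0);  /* 1.0000 1.0000 1.0000 */
    return 0;
  }

Bit replication keeps the widened value close to proportional, so equal 5-bit channels stay (nearly) equal when re-expanded; non-gray inputs would go through the normal path unchanged.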
Hmm, your diagram doesn't look quite accurate to me :-), but I agree with the point that method #2 produces a lower maximum error. Remember that in practice, the difference between 6-bit and 5-bit is either 1/64th or 0, so the value of the floating point difference from "ideal values" isn't particularly important.

I think being consistent with how everybody else does it, and in particular with how GdkRGB does it, is important, though. We really can't do a couple of floating point operations per pixel in GdkRGB. And considering how common 565 is, people who care tend to pick colors that (when converted by the standard method) are pure grays in 565.