GNOME Bugzilla – Bug 678815
16-bit TIFF images are rendered and saved with incorrect levels
Last modified: 2012-08-29 18:27:12 UTC
Me again; I am trying to test my entire workflow. This is current master, GIMP_2_8_0-927-gacd1341 (stock Ubuntu 12.04, amd64). It seems to be a regression, because I had not seen this happen before I pulled the master branch. The source of my TIFF images is ufraw (it should not matter, but ufraw produces peculiar TIFF structures that GIMP does not like, so I'm mentioning it just in case).

So I save a TIFF16 image and open it in GIMP (it complains about layers and such, but so does convert). When rendered, the image looks too pale (as if painted with a small gamma). When I export the image as TIFF, it gets written with incorrect levels.

If I first convert the TIFF16 to PNG16 and open that in GIMP, it works as expected. When I convert from PNG16 back to TIFF16 and open that in GIMP, it again looks too pale (which essentially eliminates the suspicion that the original TIFF structure has anything to do with it). If I convert it to TIFF8 and load that in GIMP, it looks normal.
Created attachment 217242 [details]
sample image illustrating the difference in levels

This image is shown with incorrect levels.
Created attachment 217243 [details]
reference image

This image is rendered correctly.
To be able to load the attached TIFF as expected, I have to apply the following patch:

diff --git a/plug-ins/common/file-tiff-load.c b/plug-ins/common/file-tiff-load.c
index f1181c1..3fc49af 100644
--- a/plug-ins/common/file-tiff-load.c
+++ b/plug-ins/common/file-tiff-load.c
@@ -720,7 +720,7 @@ load_image (const gchar *filename,
       else if (bps == 16 && alpha)
         base_format = babl_format ("RGBA u16");
       else if (bps == 16 && !alpha)
-        base_format = babl_format ("RGB u16");
+        base_format = babl_format ("R'G'B' u16");
       break;
 
 #if 0
@@ -1483,7 +1483,7 @@ load_contiguous (TIFF *tif,
                                  GEGL_ABYSS_NONE);
       gegl_buffer_iterator_add (iter, channel[i].buffer,
                                 GEGL_RECTANGLE (x, y, cols, rows),
-                                0, NULL,
+                                0, channel[i].format,
                                 GEGL_BUFFER_WRITE, GEGL_ABYSS_NONE);
 
       while (gegl_buffer_iterator_next (iter))

This clearly is insufficient, since there are many other code paths. What is interesting is that it is necessary to explicitly specify the format in the call to gegl_buffer_iterator_add. I don't know whether that is another bug or not.
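For illustration, here is a minimal standalone sketch (not part of the patch; the exact output values depend on the babl version) of why the format string matters: the same 16-bit sample yields a noticeably brighter display value when babl is told the data is linear ("RGB u16") than when it is told the data is already gamma-encoded ("R'G'B' u16"), which matches the "too pale" rendering described above.

/* Sketch only: compile with something like
 *   gcc demo.c $(pkg-config --cflags --libs babl)
 * (the pkg-config name may be "babl" or "babl-0.1" depending on the version).
 */
#include <babl/babl.h>
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  uint16_t pixel[3] = { 0x8000, 0x8000, 0x8000 }; /* mid-grey as stored in the TIFF */
  uint8_t  out_as_linear[3];
  uint8_t  out_as_gamma[3];

  babl_init ();

  /* Wrong interpretation: treat the gamma-encoded samples as linear light.
   * Gamma correction then gets applied a second time on the way to the display. */
  babl_process (babl_fish (babl_format ("RGB u16"),
                           babl_format ("R'G'B' u8")),
                pixel, out_as_linear, 1);

  /* Intended interpretation: the samples are already gamma-encoded. */
  babl_process (babl_fish (babl_format ("R'G'B' u16"),
                           babl_format ("R'G'B' u8")),
                pixel, out_as_gamma, 1);

  printf ("displayed as linear: %d, displayed as gamma-encoded: %d\n",
          out_as_linear[0], out_as_gamma[0]);

  babl_exit ();
  return 0;
}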
It is correct to explicitly pass the format, because it is the format of the data being written, which for images in 16-bit precision differs from the format GIMP uses internally: images in 8-bit precision are kept internally in the sRGB (gamma-corrected) colorspace, while images in 16-bit precision are kept internally in linear RGB. http://git.gnome.org/browse/gimp/tree/app/gegl/gimp-babl.c#n309
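To spell that mapping out, here is a hypothetical helper (the name is made up; this only sketches the idea behind the linked gimp-babl.c, it is not the actual code). It is also why gamma-encoded 16-bit TIFF data has to be labelled "R'G'B' u16" on load, so that babl converts it into the linear internal representation instead of copying it through unchanged.

#include <babl/babl.h>
#include <glib.h>

/* Hypothetical sketch of the precision-to-format mapping described above:
 * 8-bit precision is gamma-encoded (sRGB), 16-bit precision is linear. */
static const Babl *
example_format_for_precision (gint     bits_per_sample,
                              gboolean has_alpha)
{
  if (bits_per_sample == 8)
    return babl_format (has_alpha ? "R'G'B'A u8" : "R'G'B' u8");  /* gamma-encoded */
  else
    return babl_format (has_alpha ? "RGBA u16" : "RGB u16");      /* linear */
}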
If TIFF really stores gamma-corrected 16-bit values, this change is correct.
Slightly more extensive fix committed to git:

commit 3d739b0cd28a74f78b2534aa88040dcbbc6f2810
Author: Simon Budig <simon@budig.de>
Date:   Wed Aug 29 19:04:13 2012 +0200

    assume gamma-data in tiffs.