GNOME Bugzilla – Bug 72853
Incorrect RGBA handling in Noisify plug-in
Last modified: 2005-01-01 21:56:06 UTC
If two pixels have different opacity values (alpha channel), then their colors are not averaged correctly by the Noisify plug-in. It looks like the RGB channels are resampled without taking the opacity into account. As a result, the (invisible) color of a transparent pixel can bleed into an opaque pixel. The resulting image is incorrect. See bug #70335 for some test images and a longer description of the problem. This problem affects many other tools and plug-ins.
See my comment in bug #72852. In this case, no averaging happens at all. This plug-in just adds some Gaussian noise to individual channels with some probability.
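For reference, the per-channel step amounts to something like this (a minimal sketch with illustrative names, not the actual noisify.c code; the real plug-in has its own noise generator):

  #include <stdlib.h>

  /* Sketch only: perturb one 8-bit channel value by roughly Gaussian
   * noise scaled by "amount" (0.0 .. 1.0), then clamp to [0, 255].
   * The same step is applied to R, G, B -- and to A on RGBA images. */
  static int
  add_channel_noise (int value, double amount)
  {
    double noise = 0.0;
    int i;

    /* crude Gaussian approximation: sum of uniform samples */
    for (i = 0; i < 4; i++)
      noise += (double) rand () / RAND_MAX - 0.5;

    value += (int) (127.0 * amount * noise);

    return value < 0 ? 0 : (value > 255 ? 255 : value);
  }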
I tend to agree with David that we should close this one as NOTABUG.
My comments here are similar to what I said in bug #72852: as it is now, the plug-in produces some results that can only be explained by knowing the internal representation of the RGBA pixels. That gives unexpected results for most users, as you can see in the screenshot attached below. If we want to allow this plug-in to work on RGBA images (I think it should, although I am not sure about the randomize plug-in), then it should probably work on pixels with pre-multiplied alpha values. I think that it would make more sense to convert the RGB values before the noise is added and convert them back afterwards.
Created attachment 14609 [details] test2-noisify.png (68 KB) - screenshot showing noisify applied to test2 image
Please remember that your image was specifically constructed to show the case where the plug-in can do strange things. During normal Gimp drawing, one often creates fully transparent areas from areas containing some reasonable color. Then the plug-in does what one expects, is useful, and it would be sad not to have the possibility of alpha randomization at all. I agree that one needs some basic insight into what he/she is doing to use it (not anything about Gimp internals, only about what the alpha channel means at all). E.g., I may fill an area with yellow color, set its alpha to zero, and then add some alpha noise to create an area containing yellow pixels of small random opacity. I don't know how to reasonably achieve this without this plug-in -- I know some ways, but none of them is simple enough for the clueless user we are talking about, and he/she would not be able to find them. Do you want to disable the Curves tool for RGBA images because it can be used to make the color of fully transparent pixels visible? Do you want the color picker to pop up an error when you click on a transparent pixel? And if not, what's the difference? The transition between zero and +epsilon opacity is continuous in many cases (and discontinuous in others). It's reasonable to change zero opacity to nonzero, provided you have a bunch of pixels of some more-or-less known color, with various opacities -- some of them may incidentally have zero opacity. Why discriminate against them? It's probably all about the definition of "incorrect".
A possible definition of incorrect (regarding alpha): an act of unintentional visualization of the color of a fully transparent pixel. IMNSHO in this plug-in the visualization is intentional (as in Curves and Color picker) and thus not incorrect.
I know that my image was specifically constructed to show the problem, but it can also happen with normal images. Currently, if you start with a white image and clear it, or if you start with a black image and clear it, you see the same thing (a transparent area) but the plug-in will give different results. This is what I would like to prevent. As you have correctly guessed, the way I would define "incorrect" is that the plug-in reveals things that should never be revealed, and over which the usual user has little or no control. So as I suggested above, the best way to fix this would be to work with pre-multiplied alpha. This would avoid all surprises, because the worst thing that could happen by changing the opacity of fully transparent pixels is to make the image a bit darker. But at least it would not use unpredictable colors. It would also work correctly with partially transparent pixels. The only drawback is a slight loss of precision, but this does not matter for a plug-in that adds random noise.
Alpha premultiplication sounds feasible, though I'm not sure whether we both understand the same thing by alpha premultiplication here, because no pixel averaging occurs; it's just randomization. As you defined it, we can never change zero alpha to nonzero (it would definitely reveal the color) but can do the opposite, so I don't understand the darkening -- instead, it would decrease the total opacity of the image. In fact, I don't even understand why premultiplication should be used at all. It would affect the color of low-opacity areas much more than the color of high-opacity areas (OTOH it seems to counteract the darkening of bright areas and lightening of dark areas that is inevitable due to the cut-off of the probability distribution). And it wouldn't save us from handling the alpha == 0 case specially anyway. Let's compute it. Instead of working with C (channel) and A (alpha), we have C*A and A. Adding noise to them, we get C*A + e1 and A + e2, where e1 and e2 are some [small] random numbers. Alpha separation then gives (C*A + e1)/(A + e2) and A + e2. Is this what you mean? (I suppose not.) Or should the sigma of the e1 distribution be proportional to A? Personally, I would not premultiply anything, just treat zero alpha specially and not allow it to become nonzero.
No, I meant working with the pre-multiplied alpha model. The GIMP works with post-multiplied alpha: during layer composition (for example), the RGB values stored in the image have to be multiplied by the alpha value. Some algorithms are designed to work with the pre-multiplied alpha model, in which the RGB values stored in the image have already been multiplied by the alpha value. The post-multiplied alpha model gives greater precision, while the pre-multiplied alpha model makes some operations faster (that's why it is frequently used in games, for example). In addition, some algorithms work better with the pre-multiplied alpha model because of some assumptions they make about the relative color values of opaque and semi-transparent pixels. I think that the Noisify algorithm is one of them. Essentially, what I am suggesting is this (sketched in code below):
1 - convert all pixels from post-multiplied to pre-multiplied alpha: {Ra,Ga,Ba} = {R,G,B} * A / 255
2 - apply the algorithm to the image using {Ra,Ga,Ba,A}
3 - convert back to the post-multiplied alpha model: if (A > 0) then {R,G,B} = {Ra,Ga,Ba} * 255 / A
So the color values of the partially or fully transparent pixels will be lowered proportionally to their transparency. This will lower their relative importance and will also make some pixels darker, because the fully transparent pixels will have R,G,B = 0. There is no need to have a special case for transparent pixels as you suggested, because this would only shift the problem to the pixels that are 99% transparent. The solution based on pre-multiplied alpha works in all cases and will give predictable results.
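A minimal sketch of these three steps for one 8-bit RGBA pixel, assuming the hypothetical add_channel_noise() helper from the earlier sketch (names are illustrative, not the actual plug-in code):

  static void
  noisify_premultiplied (unsigned char pixel[4], const double amount[4])
  {
    int c, v;

    /* 1 - convert to pre-multiplied alpha: Ca = C * A / 255 */
    for (c = 0; c < 3; c++)
      pixel[c] = pixel[c] * pixel[3] / 255;

    /* 2 - apply the unchanged noise step to {Ra,Ga,Ba,A} */
    for (c = 0; c < 4; c++)
      pixel[c] = add_channel_noise (pixel[c], amount[c]);

    /* 3 - convert back: C = Ca * 255 / A, only where A > 0 */
    if (pixel[3] > 0)
      for (c = 0; c < 3; c++)
        {
          v = pixel[c] * 255 / pixel[3];
          pixel[c] = v > 255 ? 255 : v;
        }
  }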
> 2 - apply the algorithm to the image using {Ra,Ga,Ba,A}
And that's exactly the problematic point. *If* you applied the original algorithm to Ra,Ga,Ba,A, it would do *exactly* what I described above, including the fact that it would *still* reveal the color of transparent pixels, only in a slightly different manner than now. We both know what premultiplied alpha is. But one of us doesn't understand what the plug-in does and what "alpha randomization" means (maybe it's me, but it doesn't seem so). This plug-in does NO PIXEL AVERAGING (addition), as I wrote in the very first comment. In fact, it inherently reveals the fact that the image is composed of R, G, B, and Alpha, and allows one to *change the alpha channel independently of the color channels*. Thus it will always either behave illogically or make the color of transparent pixels visible [intentionally]. The only question is: should it behave illogically or "incorrectly"?
To put it simply: if you didn't treat alpha == 0 specially, it would randomize zero alpha to nonzero; what then happens to the color channels is another problem. Is this clear?
I guess that I was not clear enough in what I wrote above: I know that we will still have some pixels with A=0 (or close to 0) that will get a non-zero value when some noise is added to them. But the advantage of working with pre-multiplied alpha is that the final result is fully predictable: instead of revealing some random color that may have been there before, the only color that can be revealed is black.
I think that we had some misunderstanding about the goals here. My goal is _not_ to fix the algorithm in such a way that it behaves logically or correctly in all cases. As you and Sven correctly pointed out, this is not possible. I know that, but this is not the main problem. The bug that I am trying to fix here is that if you start with two images that look exactly the same but have different contents in the transparent pixels (or almost transparent), then you currently get different results. This is a bug because the result should always be the same. Working with pre-multiplied alpha would ensure that you get the same results in all cases (unless I missed something important, but I think that my reasoning is correct).
If the only thing you want is to give pixels some arbitrary but deterministic color X when changing their alpha from zero to nonzero, then premultiplied alpha is overkill -- I'll do it more easily (and using much less CPU) by treating alpha == 0 specially. But I still don't understand two things:
1. How does this plug-in differ from the Curves tool, except for operating on individual pixels instead of the image as a whole? (If you change the Curves tool to do the same as you propose here as a result of this discussion, I'll have to use a fork of Gimp from that day on, sigh.)
2. How do almost invisible pixels of opacity 1/255 (whose color is retained and possibly made visible anyway, premultiplication or not) differ from completely invisible pixels of opacity 0? Why should such a small change in invisibility cause such a big discrimination?
The proposed solution would
1. Break some cases where the current behaviour is useful (i.e., when the transparent pixels have some reasonable color and one wants to make some of them visible again (the word "again" is probably crucial here)).
2. Not solve the problem you are trying to solve. I took the test2.xcf image and changed the opacity of all transparent pixels from 0 to 1 (i.e., 1/255). The image looks the same and I can't see any trace of green/red, not even against the white background (maybe someone can, in ideal conditions). But this image exhibits the same symptoms and would continue doing so after the "fix". Moreover, I could add a third, really transparent area, which would surprisingly turn black [after the "fix"], so we would have green, red, *and* black then.
Yes, converting the whole image to pre-multiplied alpha and back takes more work than having a special case for alpha=0. On the other hand, it will work correctly for alpha=1 or other values close to 0, while the special-case trick would not work. That's the advantage of using pre-multiplied alpha: it works for all alpha values and it produces a result that can be explained in all cases (the color that is "amplified" is always black instead of some random color). I don't think that it is ever appropriate to reveal a color that should not be seen and that depends on the internal implementation (e.g., if the GIMP was always working with pre-multiplied alpha, then these colors would not be available in the first place). So from my point of view, it is clearly a bug to allow some internal data to affect the visible result for the user (we already have the "anti-erase" mode for that, anyway).
By the way, please re-read the algorithm that I suggested above: I wrote specifically that I do not want to have any special case for alpha=0. That's the whole point of using pre-multiplied alpha. So I do not understand why you say that it would not work for alpha=1, because this is precisely what I am trying to fix.
OK, it always amplifies black, I'm sorry. Well, not always. It generally does ugly things to lightness. Suppose we randomize only alpha. Then a new color channel value is (after transformation to premultiplied alpha and back) C_new = C*A/(A + e), where e is some random number (positive or negative). When e (the randomization of alpha) becomes comparable in magnitude to A (big randomization, small A, or just bad luck), the lightness of the resulting color can be arbitrary (of course, it will also be more transparent then). But no one asked for lightness randomization. No one asked for brightening of low-opacity areas and darkening of high-opacity areas. We asked for *opacity only* randomization. You can't implement this in premultiplied alpha [directly].
For me, transparent pixels are not pixels without any other property. And they are definitely not black. These poor pixels are not guilty of any crime; they are just transparent, so their properties are not visible. Pixel opacity affects its interaction with the rest of the world. But as long as we work with one individual pixel, its transparency only affects whether we see it better or worse. We can set it to zero to not see it at all and then change it back to see it again.
I suggest removing the Alpha slider and alpha randomization altogether. (I'll stick with the current version for personal use, what can I do -- except filing a bug report about useful functionality the new version will miss.) The current version is useful (yes, I definitely want to make the color of transparent pixels visible *in* *this* *particular* *case*, because it's exactly what it's intended for; see above for applications), but it may confuse clueless users. Thus it's politically (only politically) incorrect and thus outlawed. The version you propose is nice, deterministic, politically correct, but absolutely useless. I can't see why anyone could ever want to apply such a filter to an image -- except for RGB images, where it's identical to the current version.
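To put numbers on the lightness problem (my example, exact arithmetic, 8-bit channels): take C = 100 and A = 10, and suppose the alpha noise is e = -8. Then C_new = 100*10/(10 - 8) = 500, which clips to 255: a mid-dark, nearly transparent pixel comes back fully white. With e = +90 instead, C_new = 1000/100 = 10, i.e. nearly black. So randomizing alpha alone in premultiplied space can push lightness anywhere.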
One more metaphor. This plug-in (and e.g. the Curves tool too) allows users to consciously and intentionally make invisible things visible. Anyone with a brain knows that when something is invisible, he/she doesn't know what it looks like. He/she then can't expect anything and can't be surprised by how it looks when he/she manages to reveal it. Wouldn't you be surprised if you realized that all invisible things look the same (say, are yellow cylinders of diameter 30cm and height 1m) when they become visible? Even when they were different -- a dog, a waterfall, a planet -- before they became invisible?
Created attachment 16045 [details] [review] Proposed politically correct patch
Attached is a patch against 1.3.14. It makes the plug-in operate only on the Red, Green, Blue or Gray channels, even on images with an Alpha channel. Since the only thing one could actually want to do with the Alpha channel is what the plug-in has always done, and this behaviour is considered a bug, the only remaining option is to keep the Alpha channel intact. So it keeps it intact.
Raphael, do you agree that this patch is the right way of solving this problem?
Yes, this patch solves the problem, but it introduces an API change (removal of one parameter) which is not appropriate for the 1.2 branch. So this patch would be OK for CVS HEAD, but I propose a simpler solution for the 1.2 branch: keep the code as it is, but change the initial values for the parameters from: { 0.20, 0.20, 0.20, 0.20 } to: { 0.20, 0.20, 0.20, 0.0 } In other words, keep the sliders for all channels including the alpha channel, but set the default to 0 for alpha.
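In noisify.c terms, that is a one-value change to the stored defaults; something like the following (a sketch only -- the actual structure name and layout in the 1.2 source may differ):

  /* default noise amounts per channel: R, G, B, A */
  static NoisifyVals nvals =
  {
    TRUE,                      /* independent */
    { 0.20, 0.20, 0.20, 0.0 }  /* alpha default changed from 0.20 to 0.0 */
  };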
Setting milestone to 2.0 so that this bug doesn't get lonely away from all its friends (see bug #70335). We should look into committing some patches against these bugs. Dave.
Actually, having read the entirety of this bug report, I believe that this should be closed NOTABUG. Dave.
Well, obviously I would disagree with a resolution of NOTABUG: looking at the screenshot attached above (fifth comment) shows immediately that the plug-in reveals some information that should never be revealed. The plug-in is supposed to generate some noise, and this noise should be independent of "hidden" internal data (R,G,B values when A is 0). That internal data depends on the previous history of the image and on the way the GIMP handles fully transparent areas (e.g. tile swapping optimizations). To summarize what has been discussed above, there are several solutions for removing this incorrect usage of internal data:
1. Work with pre-multiplied alpha when adding the noise. This means converting to pre-multiplied alpha, applying the original noise algorithm to the modified pixel, then converting back to post-multiplied alpha. As a result, increasing the opacity of a fully transparent pixel could only make it more black, not "more like the hidden RGB data". This also works well for the pixels that are almost fully transparent (A=1, A=2, ...) because working with pre-multiplied alpha ensures that only a very limited amount of the almost invisible color will be made visible. => Added code: 2 times 3 multiplications for each pixel.
2. Treat alpha=0 as a special case, as suggested by Yeti. In this case, the RGB data in fully transparent pixels could be replaced by an arbitrary color (e.g., black) or by a random color. This would be more CPU-efficient than the previous solution, but it would introduce a significant difference between the pixels with A=0 and those with low alpha values (A=1, A=2, ...). => Added code: 1 test for each pixel, and 3 assignments from a constant or from a random value if the branch is taken.
3. Never increase the opacity, only decrease it. In other words, the noise in the alpha channel can only go in one direction. => Added code: 1 test for each pixel.
4. Do not touch the alpha channel at all. This is what is implemented by the patch supplied above. Although this removes one feature of this plug-in (noise in the alpha channel), it could be simulated by adding a layer mask and applying the noise to that mask. This would have the same effect as the previous solution: never increase the opacity. => Added code: none.
(Solutions 2 and 3 are sketched in code below.) Any of these solutions would be suitable. I won't argue against 2 anymore. ;-) Solution 4 is a bit excessive, IMHO. Note: I have changed the misleading summary of this bug report. There is no resampling, only incorrect handling of the alpha channel.
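For concreteness, solutions 2 and 3 amount to roughly the following per-pixel helpers (a sketch with illustrative names, reusing the hypothetical add_channel_noise() from above; not a patch):

  /* Solution 2: wipe the hidden RGB data of fully transparent pixels
   * before adding noise, so only a known color can ever be revealed. */
  static void
  wipe_hidden_color (unsigned char pixel[4])
  {
    if (pixel[3] == 0)
      pixel[0] = pixel[1] = pixel[2] = 0;   /* e.g. black */
  }

  /* Solution 3: let the alpha noise only decrease the opacity. */
  static void
  noise_alpha_downward (unsigned char pixel[4], double alpha_amount)
  {
    int noisy_alpha = add_channel_noise (pixel[3], alpha_amount);

    if (noisy_alpha < pixel[3])
      pixel[3] = noisy_alpha;
  }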
Created attachment 19404 [details] [review] proposed patch for 1.2 branch only: change plug-in defaults, no other API or code change
Why is the patch for 1.2 only?
Because this is the patch that does not change the behavior of the plug-in in a significant way, so it is suitable for the stable branch. I am now working on another patch for CVS HEAD.
Bumping the milestone for this bug, and all that depend on it, to 2.2. In the absence of someone actively working on this, there is no point keeping it on the 2.0 milestone. If someone wants to work on this, fixes to this family would also be accepted on the stable branch. Dave.
Comment on attachment 16045 [details] [review] Proposed politically correct patch Based on yeti's comments, this patch needs-work. Given that we now have some nifty queries to allow anyone to find patches which need review, I'm marking this needs-work to indicate it has actually been reviewed, so it won't show up in said query (patch-status.cgi). Add me to cc and scream if there are problems or questions. :)
Based on the comments here, and a bit of thought, I made a few changes in noisify.c, as follows:
1) Set the default noise level for the alpha channel to 0.
2) Make the "Independent" checkbutton apply only to the RGB channels, not alpha.
3) Do not show the "Independent" checkbutton for grayscale drawables.
In my opinion this is an adequate fix. There is no way now for the user to randomize opacity without specifically choosing to do so, and a user who does this should not be too surprised to sometimes get weird results. I also took the bold step of changing the menu entry from "Noisify" to "Scatter RGB", on the grounds that Noisify is the stupidest title ever. But I will revert this if there are objections.
2004-06-21  Bill Skaggs  <weskaggs@primate.ucdavis.edu>
    * plug-ins/common/noisify.c: changed handling of alpha channel in an attempt to deal with bug #72853. Changed menu entry from "Noisify" to "Scatter RGB".
Is "Scatter RGB" really a better name? I admit that I am not a native speaker but I cannot imagine what "Scatter RGB" would do. If Noisify is stupid, perhaps we can find a better name?
Well, "Scatter RGB" corresponds to the existing "Scatter HSV", and does the same thing, except in RGB space rather than HSV space. Perhaps "Randomize", "Tweak", "Perturb", or some other variant would be better than "Scatter", though.
Pity it is in the noise menu; otherwise "Add noise" would have been a good description.
Closing as FIXED. Thanks to weskaggs for clearing this up.