GNOME Bugzilla – Bug 93856
IIR and RLE Blur blur inconsistently
Last modified: 2006-02-13 09:29:56 UTC
Open the attached .xcf.bz2 file.
- Go to the channel dialog.
- Select "sel1".
- Channel to selection.
- Go back to "Background".
- Blur IIR: 100 (no changes appear - I expect that).
- Go to the channel dialog.
- Select "sel2".
- Channel to selection.
- Go back to "Background".
- Blur IIR: 100 (the selection is filled with a gradient - I don't expect that).
How does the IIR blur function work? AirBete. PS: Sorry if this is trivial, but I'm just unable to blur a selected region without having the external pixels interfere with it. :-( Am I that stupid?
Created attachment 11208 [details] Test file (.xcf.bz2)
Not sure if this is a bug or a feature. What would you expect? How should the plug-in handle pixels at the selection border? What about pixels that are half-way selected? I'd say the behaviour is correct.
Well, if he moves the right selection to the left a bit, he gets the same result as with the left one. The difference is the distance. The only problem I can think of is that "radius" is not really a radius, but a diameter.
The radius is not the radius of a circle but the radius of a Gaussian bell curve. Unlike a circle function, the value of the Gaussian bell is not 0 at that distance.
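To illustrate the point numerically (a minimal sketch; the interpretation of "radius" as the Gaussian's sigma is an assumption for this example, not taken from the plug-in's code):

```python
import math

def gaussian(x, sigma):
    """Unnormalized Gaussian weight at distance x."""
    return math.exp(-(x * x) / (2 * sigma * sigma))

# At x == sigma the weight is exp(-0.5), roughly 0.607 -- far from 0,
# which is why pixels beyond the nominal "radius" still contribute.
```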
You can retry the experiment with radius=1000 if you like; you'll get the same result. With the selection on the right, nothing happens, as if the plug-in had no idea of what exists outside the selected area. That's what I expect: if I select a region to perform an action on it, I don't think that information outside the region should interfere with the process.

On the other hand, with the region in the upper left corner, the result is fundamentally different. Although every pixel in the selected region is pure white, I end up with shades of grey after the blur. This makes no sense to me.

By the way, in the example I provided, both regions only contain fully selected or fully deselected pixels. If a region contained a half-way selected pixel, I would expect the plug-in to use it in its calculation and then merge the resulting pixel with the original one according to the selection "transparency".

The way it is right now, I never know, when I select a region, whether the surrounding pixels will be used in a blur. That's very annoying. When I want to blur the background of a picture to make a foreground object stand out, the selection has to be strictly taken into account by the plug-in, otherwise foreground pixels are merged with the background. Thanks.
About the example file, I think I got it: it seems the filter does not take all the required pixels as input, but just a tight bounding box of the mask. By moving the selection I managed to get the black inside the box (the bottom selection has a spike to the left). About interference: selections decide what changes, not what will be used as input. For that you will have to move to another layer, leaving the areas you do not want to contribute transparent (and wait for the fix about transparent pixels affecting computations, see bug 70335). Use quickmask to view the selection "transparency". I don't know why, but I guess other filters will have the same problem with the bounding box.
Created attachment 12351 [details] [review] gauss-iir-mask-test.patch, first test, padding is hardcoded to 200
Err, the patch is for CVS of 1.3. Sorry. :]
Thank you for pointing me to bug #70335 and for your explanation about input and output pixels.
This bug should be set to NEW, and as soon as I (or someone) decides what padding (safety) area is OK, to FIXED. I have an experimental value, based on inspecting values with the colour picker, but I guess a more formal one would be better.
Confirming this bug report. If anyone has some spare time, it would be interesting to investigate if other plug-ins or tools are affected by the same bug, like I did for bug #70335. If the same problem occurs in several plug-ins, we should open a new tracking bug.
Created attachment 14620 [details] [review] Padding is dynamic plus some extra fixed area for protection
Updated version of the patch, just in case anybody wants to keep investigating this problem; I cannot.
I only had a quick look and I think that the patch does not address the problem. The real problem is that the blur algorithm is performed on the bounding box of the selection. This box can include areas which are not selected. The pixels in that area should not be taken into account for the blur operation, they should be treated as transparent. Every pixel needs to be weighted by its value in the selection mask.
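A minimal sketch of the weighting idea described in this comment (an illustration in 1D with a made-up box blur, not the plug-in's actual IIR code): each input pixel is weighted by its selection-mask value, so unselected neighbours contribute nothing, and the result is blended back according to the centre pixel's mask value.

```python
def masked_blur_1d(pixels, mask, radius):
    """Box blur where each neighbour is weighted by its mask value."""
    out = []
    for i in range(len(pixels)):
        acc = 0.0
        wsum = 0.0
        for j in range(max(0, i - radius), min(len(pixels), i + radius + 1)):
            acc += pixels[j] * mask[j]   # weight input by selection
            wsum += mask[j]
        blurred = acc / wsum if wsum > 0 else pixels[i]
        # Blend with the original according to the centre's mask, so
        # unselected pixels are left untouched.
        out.append(pixels[i] * (1 - mask[i]) + blurred * mask[i])
    return out
```

With this scheme, a fully selected white region bordered by unselected black pixels stays pure white after blurring, which is the behaviour the original reporter expected.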
That's why I suggested opening a new tracking bug. There are probably more plug-ins that forget to weight every pixel by its selection mask. Maybe they already weight by opacity, but not by the selection mask. They should use the product of both. So bug #70335 was about the alpha channel, and the new tracking bug would be about the selection mask.
Well, we would first have to figure out if weighting by the selection is the right thing to do at all. After some discussion on #gimp I am now convinced that it would be wrong.
I don't think that the selection should (usually) have an effect on the weighting of the input pixels, it should determine what pixels are affected by an operation. So in the attached example the behaviour in the upper left selection is kind of correct, while the lower right selection gets blurred wrongly. IMHO there is no tracking bug needed until we discover further plugins/tools that operate inconsistently.
Changed target milestone of several bugs to 2.0 - these are not features, and don't look to me like blockers for a pre-release, but they have to be addressed before 2.0. Dave.
Is this a bug or not? Dave.
IMO this is a bug. Either it should take the whole image for blurring (in which case the behaviour with sel1 would be incorrect) or it should weight the pixels to blur with the selection (in which case the behaviour with sel2 is the wrong one). A behaviour similar to the latter was already implemented in the Colorcube Analysis plug-in, and I think it will run faster in most cases. I'm not aware of the objections to weighting pixels with the selection for IIR application, but I'd like to know about them.
So, there are two possible behaviours: 1) blur the selection with respect to the whole image 2) weight the input pixels with the mask The following patch implements behaviour (1). Implementing behaviour (2) requires considerably more effort which I will postpone until there is consensus on the best approach to take. Note that the RLE Gaussian Blur is affected by the very same problem. Changing the title accordingly.
Created attachment 23307 [details] [review] Patch implementing behaviour (1) in IIR filter
This approach will unreasonably slow down the blur plug-in when run on small selections. Perhaps we could instead grow the selected area by the blur radius, of course limiting it to the drawable dimensions. Actually I'd prefer solution (2) though.
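The growing idea can be sketched in a few lines (names and signature are made up for illustration; this is not the GIMP API): expand the selection's bounding box by the blur radius so enough context pixels are read, but clamp to the drawable's edges.

```python
import math

def padded_bounds(x1, y1, x2, y2, radius, width, height):
    """Expand the box (x1, y1)-(x2, y2) by radius, clamped to the drawable."""
    pad = int(math.ceil(radius))
    return (max(0, x1 - pad), max(0, y1 - pad),
            min(width, x2 + pad), min(height, y2 + pad))
```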
I've been unable to implement the fix for behaviour (2). I believe that doing it properly will require a deeper understanding of the workflow of the algorithm than I have now. Simply premultiplying with the selection, as is currently done with the alpha, leads to the introduction of black blurred pixels near the frontier of the selection. I give up. Suggest bumping the milestone to 2.0.1 or 2.2 since this does not look like a blocker.
Guillermo's patch is a lot more reasonable than the one that Pedro attached. It is still not clear how large the area needs to be extended and Guillermo's patch uses some magic values there but if we don't come to a better solution, we should probably apply it. What I don't understand about Guillermo's patch is why he calls gimp_drawable_update() on the padded area. It should be sufficient to flush the unpadded area, the bounding box of the selection.
Moving to the 2.0.1 milestone as Pedro suggested. However this shouldn't keep anyone from posting an updated patch before 2.0.
A general principle is needed here, and I think it should be this: for hard selections, the edge of the selection should be treated like the edge of the image. That is, pixels outside the selection should be treated as if they did not exist. For many filters, the edge of the image requires special handling, which often takes some thought to get right. The same type of special handling should be applied wrt the selection. For fuzzy selections, it is up to the filter how to handle the fuzzy regions, but it should be done in a way that does not introduce discontinuities if possible. Incidentally, the majority of plugins I have looked at handle these things poorly or not at all.
I suggest we move this to the 2.2 milestone and apply the patch (or a modified version) as soon as we branched. Any objections?
If you can identify specific things that need modifying, I think this patch could be applied immediately once they're done. There's no reason to bump this to an unstable release, since it's just a bug-fix. Cheers, Dave.
I outlined the (small) issue I see with the patch, yet there hasn't been an update or any comment on this.
Created attachment 26333 [details] [review] updated and slightly cleaned up version of Guillermo's patch
The patch I attached is basically Guillermo's patch but applied to current CVS without the variable declarations that the old patch introduced and w/o changing the area of gimp_drawable_update().
I've run the test case scenario described when the bug was created. After applying the patch, blurring sel2 seems to work well, but now blurring sel1 shows the problems that were originally reported for sel2. This seems to indicate that the patch does not fix the problem. I suggest we bump this to 2.2 then.
Bumping to 2.0.2 for now...
Maybe I can give a little insight into what is going on here. You see, mathematically a Gaussian blur has a very special property that allows it to be implemented in a very efficient way in 2D: the property that exp(-x^2)*exp(-y^2)=exp(-(x^2+y^2)). This property permits the 2D convolution to be implemented by two 1D convolutions, first vertically on columns, then horizontally on rows -- which is orders of magnitude faster than doing the full 2D convolution. Unfortunately, the validity of this depends on all pixels being weighted equally. If they are not, the mathematical validity of the algorithm breaks down, and at a practical level, the algorithm will produce visible artifacts in some situations. I don't believe there exists a really good solution for this. The alternatives are (a) do it correctly, but slow it down by a factor of possibly 100 or more; (b) use the algorithm anyhow, and try to minimize the artifacts. The premultiplying hack I see as an attempt at (b), and it may not be possible to do much better. In any case, the bottom line is that there is essentially zero chance of finding a solution that just works great in all situations by fooling around with the algorithm.
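The separability property described above can be verified numerically (a small self-contained sketch, not taken from the plug-in): a normalized 2D Gaussian kernel is exactly the outer product of two normalized 1D Gaussian kernels, which is why two 1D passes reproduce the full 2D convolution.

```python
import math

def gauss_1d(radius, sigma):
    """Normalized 1D Gaussian kernel."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def gauss_2d(radius, sigma):
    """Unnormalized 2D Gaussian kernel, exp(-(x^2 + y^2))."""
    return [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
             for x in range(-radius, radius + 1)]
            for y in range(-radius, radius + 1)]

k1 = gauss_1d(2, 1.0)
k2 = gauss_2d(2, 1.0)
s2 = sum(sum(row) for row in k2)

# Each normalized 2D weight equals the product of the 1D weights,
# because exp(-x^2) * exp(-y^2) == exp(-(x^2 + y^2)).
for i in range(5):
    for j in range(5):
        assert abs(k2[i][j] / s2 - k1[i] * k1[j]) < 1e-12
```

As the comment notes, this identity only holds when all pixels are weighted equally; per-pixel selection weights break the factorization, so the two 1D passes no longer equal the true 2D weighted convolution.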
Created attachment 26800 [details] [review] Patch implementing behaviour (2) according to comment #21 Alas, I was unaware of the ongoing work by Bill in bug #134088. Anyway here's a patch that performs the premultiplication trick. It is quite big because it involves inserting the selection data in the buffer together with the alpha, and all four combinations of alpha/alphaless and selection/selectionless must be handled (separately, for speed). The selection data is used to premultiply, then it's convolved just like the rest, then used to separate. While at it, the patch fixes two issues: lack of precision (it operates on doubles where possible, i.e. at all times except in the final stage of each pass), and a darkening of pixels with alpha during the vertical pass. There are two issues remaining which were also present before the patch and may be worth separate reports. On one side, the pixels at the borders of the image (or the selection's bounding box) have a very strong influence on the blurred result. RLE is also affected. The other issue is that with radius < 1.0 (very noticeable with radius = 0.1) the image gets lightened up. This one is NOT reproducible with RLE, which apparently does the right thing.
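The premultiplication trick described here can be sketched as follows (an illustrative 1D version with a stand-in box blur, not the patch's C code): colour is multiplied by the selection value before the convolution, the selection channel itself is blurred with the same kernel, and the blurred selection is divided back out afterwards.

```python
def blur_1d(values, radius):
    """Simple box blur standing in for the IIR Gaussian pass."""
    n = len(values)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def premultiplied_blur(color, sel, radius):
    pre = [c * s for c, s in zip(color, sel)]   # premultiply by selection
    pre_b = blur_1d(pre, radius)                # convolve colour channel
    sel_b = blur_1d(sel, radius)                # convolve selection channel
    # Separate: divide the blurred colour by the blurred selection weight.
    return [p / s if s > 0 else 0.0 for p, s in zip(pre_b, sel_b)]
```

With this normalization, a fully selected white region next to unselected black pixels stays white instead of picking up grey from outside the selection.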
Created attachment 26830 [details] test case (.xcf.gz) Please try the patched version, with blur radius 50, on this image to see if it behaves reasonably.
I have doubts that making the selection work as an input modulator is the right solution, because for some cases it is right and for others it is not (it depends on whether you want bleeding or not).
Created attachment 26833 [details] result of blurring test file with patched iir, radius=50,50 I'll answer my own question. The attached xcf shows the result of blurring with the patched version of gauss_iir. The following xcf shows the result of blurring with gauss_rle -- unpatched; the unpatched gauss_iir would look identical. The problem arises because of the way the algorithm works, first blurring vertically on columns, then horizontally on rows. If you rotate the example xcf by 90 degrees and then blur it (using the patched gauss_iir), the result will look completely different.
Created attachment 26834 [details] same as above, but using unpatched gauss_rle
This comment is basically a reminder that some action should be taken here. The reminder is coming from me because I have created a merged version of IIR and RLE, which is on hold until the current bug is resolved. Let me summarize the current state of affairs, in what I hope is an uncontroversial way. Pedro has created a patch that improves the behavior of the plug-in in important ways but also introduces some problems. Improvements: 1) Pixels outside the selection no longer affect the result of blurring. 2) Transparent areas no longer cause darkening of pixels near them. Problems: 1) The perfect rotational symmetry of blurring is now lost when there is a selection and/or transparency. 2) Visible horizontal streaks appear when highly contrastive areas are located directly above or below "holes" (i.e., unselected or transparent areas). 3) The patch is rather complex and may have other side effects that are not yet apparent. As I have argued above, the mathematics of the blurring algorithm imply that a perfect solution is not possible, so there is no use in waiting for one to appear. Thus, there are basically three possible approaches: A) Accept Pedro's patch and also apply it to RLE. B) Decline the patch with the option of reconsidering it later. C) Try to improve the patch. (But how?)
I'm convinced that weighting input pixels by how much they are selected is generally the wrong behavior for plug-ins, because then the meaning of the selection is overloaded; it is used to specify both the weight of the input and the amount of blending of the output with the old values. While this may be reasonable for some plug-ins, it's not true in the general case. In other words, for any given plug-in, the input should not be weighted by the selection unless a strong case can be made that weighting by the selection is the right thing to do. Otherwise, if arbitrary weighting is desired, some other means should be used, such as requiring a layer whose value represents the weight. This is essentially what we do with bumpmap, for instance. In this case, since the meaning of blurring implies blending a pixel with its neighbors, I can imagine that a lot of the time a user wants the selected pixels to be blurred with their neighbors, even though those pixels are not themselves blurred. I agree, however, that transparent pixels should not affect the outcome of the blur on the RGB channels.
OK, I'd say we go for weighting with the pixels transparency then.
We shouldn't change the behaviour in the stable branch, so I am moving this to the 2.2 milestone.
The two blur plug-ins have now been merged (see bug #134088) so we will need a new patch no matter what we decided or will decide.
Based on comment #35, I'm proposing WONT-FIX for this. It appears to me as though fixing this bug will cause others, or will cause major performance issues with a plug-in which is already pretty slow.
I tend to agree with Bolsh here.
I agree as well.
*** Bug 326697 has been marked as a duplicate of this bug. ***