GNOME Bugzilla – Bug 326697
Blur is not mathematically accurate
Last modified: 2008-01-15 13:04:00 UTC
Please describe the problem:
When you run any of the blur filters on a graphic, using any method, the colors produced are not exact and sometimes give weird results.

Steps to reproduce:
1. Go to File > New. Choose a transparent fill and a 256x256 image (any size will do, but 256x256 is sufficient).
2. Use the ellipse selection tool and make a 256x256 circle (or the size of the image). Having antialiasing active doesn't matter, but it helps here.
3. Change the foreground color to hex 621DE9, or some similarly screwy color value.
4. Bucket fill your circle with this color, then select none (click in the open without dragging, or use the Select menu).
5. Run Gaussian blur on your circle. The default setting (5x5) is acceptable, but higher settings, like 13x13, are better.
6. Use the eyedropper tool and take note of the colors in its tool window.

Actual results:
The colors shown do not match the original color at all. Where the color is transparent is where it gets really weird. Using the color mentioned above, where the alpha is 16 the color shown is 5f0fef, not the original color at all. Where the alpha is 3, the color shown is 5500aa. Even where the alpha is 255, full intensity (not transparent), the color does not match the original. Then go to the Channels dialog, make it so only the alpha channel can change, bucket fill the entire image, and take note of the edges. The edges look very weird.

Expected results:
The RGB color shown should always stay the same when the shape is on a fully transparent background. If two different colors are involved, then the resulting RGB color should be properly calculated.

Does this happen every time?
It happens regardless of the blur method used and the radius used.

Other information:
The formula for averaging a translucent color over a solid background is this:

    MCOL = (COLA*ALP + COLB*(255-ALP)) / 255;

COLA is the top layer's color and COLB is the color beneath it, processed bottom to top. ALP is the alpha of the top layer (from 0 to 255, not as a percentage); in gradients it's the "parts per 255" value from the start towards the end, which is what blurs use. This is repeated for the red, green, and blue channels individually, and the results are rounded to the nearest integer (by adding half and dropping the trash after the decimal). I have proven this formula, as I use it constantly to calculate fog, lighting, and a few other things in graphics. I don't yet have the formula for merging two translucent colors, though (I can calculate the resulting alpha, but not the RGB color). The resulting alpha from merging two translucent colors (also used in blurs) is this:

    RA = (FG/255 * (1 - BG/255) + BG/255) * 255;

RA is the resulting alpha. FG is the alpha of the foreground color (the higher layer) and BG is the alpha of the background color (the next lower layer). The result is also rounded to the nearest integer. The formula could probably be simplified, but I don't see a way to do it.
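For reference, here is a minimal C sketch of those two formulas, assuming 8-bit channels throughout; the function names are placeholders of mine, not anything from GIMP:

    #include <stdint.h>

    /* MCOL = (COLA*ALP + COLB*(255-ALP)) / 255, rounded to nearest:
       one channel of a translucent layer (cola, alp) composited over
       an opaque background channel (colb). */
    static uint8_t
    composite_channel (uint8_t cola, uint8_t colb, uint8_t alp)
    {
      unsigned int num = (unsigned int) cola * alp
                       + (unsigned int) colb * (255 - alp);

      /* adding 127 before dividing rounds to the nearest integer;
         an exact tie cannot occur because 255 is odd */
      return (uint8_t) ((num + 127) / 255);
    }

    /* RA = (FG/255 * (1 - BG/255) + BG/255) * 255, rounded to nearest:
       the resulting alpha of two translucent layers.  Algebraically
       this is the same as FG + BG - FG*BG/255. */
    static uint8_t
    composite_alpha (uint8_t fg, uint8_t bg)
    {
      unsigned int num = (unsigned int) fg * 255
                       + (unsigned int) bg * (255 - fg);

      return (uint8_t) ((num + 127) / 255);
    }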
This seems to be a duplicate of bug #70335. I am not sure whether it describes a new issue. Please read the other bug and tell us if we can resolve this one as a duplicate. Thanks. Setting NEEDINFO.
Reading through that bug, I'm at least 96% certain that this bug report is a duplicate of it. The only feature that is mathematically correct is the scale feature (linear scaling). The gradient tool is imprecise as well, but not as badly as the blur plug-in and others. One slight difference from that bug, though, is that I've given more details on the output: if the alpha is x, the color channels behave as if only x+1 distinct values were available. Note the alpha of 3 above: the values shown are only 00, 55, AA, or FF, i.e. 4 levels. For the alpha of 16 (as shown above), the values shown end in F (but shouldn't, as rounding should be involved), and there are 17 levels: 00, 10, 20, 30, 40, 50, 60, 70, 80, 8F, 9F, AF, BF, CF, DF, EF, and FF. Perhaps this is a clue to help track it down: the trash after the decimal from the resulting division is seemingly ignored in some cases.
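If it helps to pin this down: the level counts above are what you would get from pushing each channel through an 8-bit pre-multiplied alpha round trip. A small stand-alone C demo (my own rounding assumptions, not the actual plug-in code) counts the distinct values that survive:

    #include <stdio.h>

    /* Push every channel value 0..255 through a pre-multiply /
       un-pre-multiply round trip at a given alpha and count how many
       distinct values come back.  For alpha = a this is a + 1, which
       matches the 4 and 17 levels listed above. */
    static void
    count_levels (unsigned int a)
    {
      int          seen[256] = { 0 };
      unsigned int c, levels = 0;

      for (c = 0; c <= 255; c++)
        {
          unsigned int p = (c * a + 127) / 255;             /* pre-multiply    */
          unsigned int r = a ? (p * 255 + a / 2) / a : 0;   /* un-pre-multiply */

          if (!seen[r])
            {
              seen[r] = 1;
              levels++;
            }
        }
      printf ("alpha %3u -> %u distinct channel values\n", a, levels);
    }

    int
    main (void)
    {
      count_levels (3);    /* 4 levels   */
      count_levels (16);   /* 17 levels  */
      count_levels (255);  /* 256 levels */
      return 0;
    }

The exact rounding in the plug-in may differ from this sketch, but the count of levels per alpha value is the point.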
Well, if I understand correctly, this is something that has come up before. Let me refer you to bug 93856, specifically comment #35.
I suggest closing this bug. Either WONTFIX or mark it as a duplicate of bug #93856 or bug #72848 (not the tracking bug #70335). Then, if the reporter is still interested in a "mathematically correct" blur, it would be better to open a new bug report (enhancement) suggesting to add an option to the blur plug-in for "correct but very slow blur". This should be an option (otherwise the report would be a direct duplicate of bug #93856) and it should be off by default because the performance penalty would be huge. If the user activates this option, the blur plug-in would do a real 2D convolution with correct weighting of the alpha channel (without pre-multiplying alpha). It would be some orders of magnitude slower than the default algorithm, but this could be useful for small images.
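To illustrate what I mean by correct weighting of the alpha channel, here is a rough floating-point sketch (assumed helpers, not a patch and not GIMP API) of how one output pixel of such a 2D convolution might be computed:

    /* Pixel values in the [0,1] range; `get_pixel' is assumed to clamp
       coordinates to the image edges and `kernel' is assumed to be a
       normalised (2*radius+1)^2 Gaussian.  None of this is GIMP API. */
    typedef struct { double r, g, b, a; } Pixel;

    static Pixel
    blur_pixel (int x, int y, int radius,
                const double *kernel,
                Pixel (*get_pixel) (int px, int py))
    {
      double r = 0.0, g = 0.0, b = 0.0, a = 0.0;
      int    dx, dy;

      for (dy = -radius; dy <= radius; dy++)
        for (dx = -radius; dx <= radius; dx++)
          {
            double k = kernel[(dy + radius) * (2 * radius + 1) + (dx + radius)];
            Pixel  p = get_pixel (x + dx, y + dy);

            r += k * p.a * p.r;   /* colour is weighted by kernel AND alpha */
            g += k * p.a * p.g;
            b += k * p.a * p.b;
            a += k * p.a;         /* alpha is weighted by the kernel only   */
          }

      Pixel out;
      out.a = a;
      out.r = a > 0.0 ? r / a : 0.0;   /* colour is undefined where alpha is 0 */
      out.g = a > 0.0 ? g / a : 0.0;
      out.b = a > 0.0 ? b / a : 0.0;
      return out;
    }

The colour of a neighbour only contributes in proportion to its alpha, so fully transparent pixels cannot bleed into the result; the cost is a full (2*radius+1)^2 loop per pixel instead of two separable 1D passes.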
It shouldn't actually be all that much slower. Here's how I see it going faster (although it will take more memory); a rough sketch follows below.

1. A copy of the original image is created as the base to read from.
2. The pixel in the top left corner (0, 0) is taken, along with the pixels in a square extending the radius distance from it. That is, with a radius of 3, from the pixel at (0, 0) the pixels at (3, 3), (1, 2), etc. are taken. Offsets that would fall outside the image, such as (-3, -3), instead use the nearest pixels within the actual image. The pixels used for the average are held in memory (there are very few of them, so it would usually be less than a kilobyte, nothing significant).
3. These pixels are averaged into a single pixel, and that single pixel is painted onto the main graphic (not the backup). The temporary pixels are then released from memory.
4. Steps 2 and 3 are repeated for the pixel at (1, 0) and so on. Reads outside the edge of the image are taken from the nearest pixel inside it (that is, a read at (14, -2) would use the pixel at (14, 0), and one at (258, 228), assuming a 256x256 image, would use the one at (255, 228)).

This method shouldn't take as long. The original image, which pixels are read from and averaged, should be kept in memory in full, or at least enough of it to cover patches the size of a 1024x1024 image. That way only the output would use the tile caching system, and 4 MB of memory isn't much for today's systems. It's the best workaround I can think of. Otherwise, this bug would be a duplicate of bug #72848 as you have suggested.
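A rough sketch of the averaging described in those steps, with the clamp-to-edge behaviour from step 4 (plain C over a packed RGBA buffer; the helper names and image layout are my assumptions, not GIMP code):

    #include <stdint.h>

    static int
    clamp (int v, int lo, int hi)
    {
      return v < lo ? lo : (v > hi ? hi : v);
    }

    /* Average the (2*radius+1)^2 neighbourhood of every pixel, reading
       from an untouched copy of the image (src) and writing into the
       working image (dst), clamping coordinates to the image edges. */
    static void
    box_blur_rgba (const uint8_t *src, uint8_t *dst,
                   int width, int height, int radius)
    {
      int x, y, dx, dy, c;

      for (y = 0; y < height; y++)
        for (x = 0; x < width; x++)
          for (c = 0; c < 4; c++)                  /* R, G, B, A */
            {
              unsigned int sum = 0, n = 0;

              for (dy = -radius; dy <= radius; dy++)
                for (dx = -radius; dx <= radius; dx++)
                  {
                    int sx = clamp (x + dx, 0, width - 1);   /* (14, -2) -> (14, 0) */
                    int sy = clamp (y + dy, 0, height - 1);

                    sum += src[(sy * width + sx) * 4 + c];
                    n++;
                  }
              /* round to the nearest integer, as in the original report */
              dst[(y * width + x) * 4 + c] = (uint8_t) ((sum + n / 2) / n);
            }
    }

Note that this still averages R, G and B without looking at the alpha channel, so by itself it would show the same colour problem as the original report; to get the colour behaviour asked for, the colour channels would also need the alpha weighting from the sketch in the previous comment.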
Well, the procedure you are describing would not produce a Gaussian blur -- it would produce a blur of a sort that GIMP does not currently implement, except in the case of radius=1, where it matches what the simple "blur" plug-in does. But in any case, if you want to pursue these ideas, you should download the code for the plug-in you want to improve, and hack on it. It is almost impossible to have good ideas about how things should be coded without having direct experience with the code. Be aware, though, that blurring is such a common and essential operation that any changes would need to be examined pretty rigorously before being accepted.
Closing this bug report (duplicate). Please open a new bug report if you want to discuss one of these enhancements:

- Add an option to the Gaussian blur plug-in for performing a mathematically correct 2D convolution instead of two 1D convolutions. This would be very slow but could be useful for small images with partially transparent areas containing very different colors (e.g., icons).
- Add a new type of blur (not Gaussian) using the method described in comment #5.

Please use separate bug reports instead of continuing the discussion in this one, so that each bug report deals with only one specific issue.

*** This bug has been marked as a duplicate of 93856 ***